
Research fraud and its combat: what can a journal do?

2013 · 14 citations · 7 authors · Medical Education · Open Access

Abstract

In 2011, an accomplished Dutch professor of social psychology, Diederik Stapel, was unmasked as an unprecedented scientific swindler. The shock waves in the Dutch scientific community caused by the first news of his decade-long behaviour led to investigations in three universities where he had been appointed, and resulted in astonishing revelations.1 According to this report by Levelt et al.,1 Stapel was generally regarded as an esteemed, successful scientist. He had been characterised in the past as a ‘golden boy’: charismatic, ambitious, brilliant and creative in designing research, but also authoritative and somewhat peremptory. However, in August 2011, three junior researchers from Stapel's group reported to the department chair their suspicion that Stapel had substantially falsified data. Within weeks, the university rector established a committee to investigate thoroughly the scientific conduct of Stapel, in collaboration with similar committees at two other universities that had previously employed Stapel. A re-examination of more than 100 of Stapel's publications from the past decade revealed that the experimental designs would often have been difficult or impossible to execute in practice. Datasets regularly contained no missing data (highly unlikely in social science studies), datasets had been copied across different studies, and improbably high effect sizes and correlations had been reported compared with similar studies in the domain. In some cases, research had not been carried out at all; survey data had been fully fabricated, or data had been partially deleted or replaced when they did not contribute to the desired outcomes. In virtually all cases, co-authors had been misled. Clear evidence of fraud was found in 55 publications, most of which were published in respected journals (including Science), as well as in 10 PhD theses supervised by Stapel. Several other publications were also suspected of fraud.
When co-authors were interviewed by an investigating committee, in retrospect they recognised that a combination of personal friendship and supervisor pressure had led them to believe that Stapel's personal contributions to their studies represented true and accurate data.1 Stapel was dismissed from his university appointments and lost his PhD degree. His full cooperation with the investigations and his admission of all reported misbehaviour, including in a recent best-selling book of self-examination,2 cannot reduce the huge damage caused by his conduct. Not only have theoreticians and experimental researchers who built on his work been affected, but also the credibility of social science as a whole. The Levelt report1 condemns Stapel's behaviour, and with it the research culture that makes such fraud possible. After re-examining many studies, the committee judged social psychology to have a ‘sloppy science’ culture, which includes a lack of critical analysis of its own findings, a focus on ad hoc ‘interesting’ findings with little theoretical basis, a lack of replication of studies and other methods to verify the validity of findings, incomplete or distorted descriptions of experimental procedures, deletion of findings that do not support a hypothesis, careless or incompetent use of statistics, and too frequent removal of outlier data. More generally, the committee criticised a lack of critical review of research manuscripts and the uncritical conduct of journal editors. Of course, social psychology is not the only domain in which fraud has been found. In biomedical science, the majority of articles retracted by journals show signs of deliberate misconduct (data falsification, data fabrication, plagiarism).3 Many fraudulent published reports probably go unrecognised.
A single publication showing misconduct by an author is probably part of a larger problem and, some argue, should lead to an investigation of all papers by that author.4 This happened in Stapel's case, but should be a general policy.5 Scientific misconduct has been called an epidemic5 and is probably more widespread than we like to acknowledge. How should this news affect the policy of medical education journals? Stapel was co-author of two recent papers in this domain,6, 7 both of which have been re-examined by the Levelt committee and, fortunately, reported as not affected by fraudulent behaviour.1 However, all or most of the experimental studies reported in medical education journals contain data that have not been checked by independent reviewers, and much of the interaction of authors with editors and reviewers builds on trusting the integrity and responsibility of researchers. Given the rapid increase of publications in the medical education domain (there are now an estimated 25+ journals in the field), the chances of publications containing fraudulent data increase too. Editors and reviewers have their limitations; it is impossible for them to fully exclude fraud. Even with high levels of suspicion, it may be extremely difficult to prove misconduct.8 Reviewers and editors can hardly ever check the validity of reported data provided by authors. So what can be done? Medical education research is largely a social science. Empirical studies in social science require data collection, data processing, often data selection, statistical processing, interpretation and reporting. All these stages are vulnerable to misconduct. What policy can a journal employ to minimise the risk of fraudulent data manipulation? The Quality and Standards Advisory Group of Medical Education, formed in 2007 to advise on matters of integrity related to the journal, would like to suggest a few approaches that, although they will not prevent fraud, may reduce its probability.
We offer the following suggestions to open up a discussion. First and foremost, the prevention of misconduct should begin in the microsystem of research teams. Supervisors and co-authors should stress the need for transparency of data collection and should review data. Doing so not only teaches general ethical conduct, but also increases the validity of research conclusions and reduces the impact that invalid research can have on the research domain and on the authors. In most of the cases described above, co-authors were not aware of the manipulations of the primary researcher or the author providing the data. By submitting a manuscript, all authors implicitly confirm their faith in the accuracy of the data collected and processed. However, faith, apparently, is not enough. In many instances, authors have relationships that discourage the double checking of data. These may include hierarchical relationships, friendships or simple collegiality. By requiring a statement from one or more co-authors that they have examined the data and confirm that no manipulation has occurred, a barrier to fraudulent data manipulation could be established. In any revelation of misconduct, all authors should bear some responsibility. In many cases, one or more of the authors have been distant from the data-gathering process. The primary investigator should, however, always be ready to share any source files requested by any co-author. Seldom, if ever, do reviewers of journal submissions receive or request to see original data. Seeing data does not guarantee their accuracy, but it may show peculiarities that could lead to further investigation, and simply asking for them may have a preventive effect. Authors may be asked to routinely submit data files with original empirical studies, clearly formatted to allow for transparent review, including how missing data and outliers have been handled.
Although many reviewers may not have the time or the expertise to check data or re-run analyses, on occasion, perhaps determined by the editor, a paid external statistician might be consulted. At the same time, reviewing may become not only more burdensome, but also more interesting and meaningful if reviewers can see more of the details of the study than readers of the journal. Even if they only glance through data, the preventive effect of this potential scrutiny could be significant. This may need to be incorporated into ethical approval routines, but that, in itself, could also have an early preventive effect. Another option might be to require a signed review report by an independent researcher confirming the adequacy of the data, in conjunction with the submission of empirical manuscripts. This lays the responsibility on the authors (where it primarily should be) rather than on the journal. Alternatively, ethical review committees could take up this task, but that would broaden their focus from the protection of research subjects to the confirmation of data integrity. It is questionable whether this is feasible in practice. A journal should encourage the publication of well-performed studies that show ‘no significant effect’. A journal's desire to have the first news of a finding should not be the determining factor in decisions to publish. Comprehensive systematic reviews and meta-analyses simply must include all known high-quality studies. Hence, no-effect studies must be reported and not rejected on the grounds of a lack of significant results. This should discourage researchers from manipulating data in order to reach significance and increase their chances of publication by drawing impressive or provocative conclusions. Of course, a ‘no significant effect’ conclusion that results merely from insufficient statistical power should not be accepted.
In many domains, the quest for securing funding for research and publishing the results has become a strong and primarily quantitative requirement for researchers, overshadowing scientific integrity. The approaches we suggest do not change the reality of a highly competitive scientific world, probably (and fortunately) more dominant outside medical education than within our area of research, but they may limit its impact on the quality and standards of journals like Medical Education.

Topics: Academic integrity and plagiarism · Clinical Reasoning and Diagnostic Skills · Artificial Intelligence in Healthcare and Education