OpenAlex · Updated hourly · Last updated: 18.03.2026, 20:20

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

We need more research on causes and consequences, as well as on solutions

2014 · 10 citations · Addiction · Open Access
Open full text at the publisher

10 citations · 1 author · Year: 2014

Abstract

There is no direct evidence that the reproducibility of published findings has actually declined, or that bias and misconduct have increased in frequency. The recent rise in retractions, typically invoked as evidence of an epidemic of fraud, is instead accounted for entirely by the increasing number of journals that implement policies to retract papers, and should therefore be interpreted as a positive sign 1. The most reliable evidence of a growing problem comes from two independent studies 2, 3 that measured the prevalence of reported 'positive' or statistically significant results in electronic databases, using different proxies. Both studies show that positive-outcome bias, at least in scientific abstracts, has grown in most disciplines and countries. It is still unclear, however, whether and to what extent this growth in literature biases reflects a growth in actual significance chasing, or in the selection or manipulation of data. The rate at which scientists admit to having committed various forms of misconduct, for example, has declined, not increased, over the years 4. Evidence derived from retractions might also be highly misleading on this point. Higher retraction rates in high-impact journals can be explained somewhat trivially, not just by higher scrutiny but by the fact that, historically, high-impact journals were the first to establish policies for retracting their own papers, and to this day are the most rigorous in implementing them 1, 5-8. The finding that in some fields high-impact journals are more likely to publish effects of extreme magnitude may or may not represent a problem, depending on what the low-impact journals are publishing and how well scientists within those fields can access all available results. Based on a similar logic, some authors remain unconvinced that the file-drawer phenomenon is a problem at all (e.g. 7). Retractions, once again, can tell us very little.
While some of the countries with publication-incentivising policies also seem to issue more retractions 8, this correlation could simply reflect the presence in these countries of better policies and structures to deal with misconduct. There is some evidence that submissions to high-impact journals such as Science have increased, particularly from countries where publications are rewarded financially, while acceptance rates from these countries have not increased 9. This, however, does not prove that research from these countries is increasingly flawed or fraudulent. Stronger evidence comes from studies suggesting that: (i) positive results are published more in US states characterized by high academic ‘productivity’ 10; (ii) the magnitude of reported effects correlates with the research and development (R&D) expenditure of the corresponding author's country 11; and (iii) the United States might suffer in some fields from higher biases than European countries 12, 13. These studies, however, show only correlation, not causation. Even more subtly, such studies are at risk of the ecological fallacy: correlations between reported results and individual study characteristics might turn out to go in the opposite direction and/or identify completely different predictors of publication bias. Since it is ultimately people, not countries, that publish results, such lower-level correlations would be more informative about what ‘causes’ bias and misconduct. These somewhat sceptical remarks are not intended to diminish the concern regarding bias and misconduct in research, but only to point out that, while such problems are plausible and potentially very serious in principle, in practice we still have few certainties about them. 
Given the complexity and diversity of contemporary scientific research, it seems reasonable to start from the assumption that the prevalence, magnitude and causes of bias and misconduct vary by field and country, in ways that could, should and need to be assessed empirically. Unless (or until) future research manages to identify a single factor as the prominent source of waste and bias in science, it is more conservative to assume that these ills will be cured not by a single intervention but by combinations of interventions, each combination tailored to specific fields and contexts (see also 14). Therefore, while I completely support Ware & Munafò's suggestion that Addiction as well as other journals try forms of pre-study registration, I would also recommend experimenting more broadly; for example, assessing the efficacy of results-blind peer-review 15, post-publication peer-review 16 and reporting guidelines 17, 18.

Topics

Academic integrity and plagiarism · Artificial Intelligence in Healthcare and Education · Academic Publishing and Open Access