This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Preserving Scientific Integrity in Academic Publishing: Navigating Artificial Intelligence, Journal Policies, and the Impact Factor as a Quality Indicator
Citations: 0
Authors: 13
Year: 2025
Abstract
The integration of artificial intelligence (AI), the rise of mega-journals, and the manipulation of impact factors present challenges to scientific integrity. These trends threaten the core principles of objectivity, reproducibility, and transparency. This paper highlights two categories of threats: (1) external pressures, such as AI misuse and metric-driven publishing models, and (2) internal systemic flaws, including the 'publish or perish' culture and methodological fragility. Mega-journals, characterized by high-volume publishing and broad interdisciplinary scopes, improve accessibility and accelerate dissemination. However, the emphasis on publication volume might weaken the rigor of peer review. To navigate these challenges, the authors propose a balanced approach that harnesses innovation without compromising scientific integrity. Proposed solutions include mandating AI transparency through frameworks like the Consolidated Standards of Reporting Trials-AI, and redefining impact metrics to emphasize reproducibility, mentorship, and societal impact alongside citations. Scientific journals should base career advancement less on publication quantity and more on quality. Global cooperation, via initiatives like the San Francisco Declaration on Research Assessment and the Committee on Publication Ethics, is essential to standardize ethics and address resource disparities. This paper proposes solutions for researchers, journals, and policymakers to realign academic incentives and uphold the ethical foundation of science. By fostering transparency, accountability, and equity, the scientific community can preserve its ethical foundations while embracing transformative tools, ultimately advancing knowledge and serving society. LEVEL OF EVIDENCE: V.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations
Authors
Institutions
- Sağlık Bilimleri Üniversitesi (TR)
- Fatih Sultan Mehmet Eğitim Ve Araştırma Hastanesi (TR)
- Istituto Ortopedico Rizzoli (IT)
- Sinai Hospital (US)
- McMaster University (CA)
- Goethe University Frankfurt (DE)
- University Hospital Frankfurt (DE)
- German Red Cross (DE)
- Saarland University (DE)
- University of Münster (DE)
- Klinikum Brandenburg (DE)