This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A Comparative Analysis of Author Guidelines on the Use of Generative Artificial Intelligence for Manuscript Preparation in the Top 100 Medical Journals
Citations: 0
Authors: 2
Year: 2025
Abstract
We conducted a cross-sectional analysis of author guidelines from the top 100 medical journals by SCImago Journal Rank to evaluate the coverage and content of policies related to generative artificial intelligence (GAI). Among the journals analyzed (median impact factor, 24.8), 76% permitted GAI for language editing, whereas fewer allowed it for drafting text (26%), figure or table creation (22%), or data analysis (12%). Most journals (78%) explicitly prohibited the use of GAI to generate entire manuscripts. Disclosure of GAI use was required by 78% of journals, although only 16% provided specific disclosure formats. Most journals (80%) assigned responsibility for final content to human authors and prohibited listing GAI as an author. Only 33% of journals referenced external ethical frameworks, with the International Committee of Medical Journal Editors (ICMJE; 16%) and Committee on Publication Ethics (COPE; 12%) being the most commonly referenced. Publisher identity strongly predicted policy adoption across all dimensions (Cramér’s V > 0.8 for multiple policy areas). Moreover, geographic region was moderately associated with GAI policies. However, journal impact metrics showed limited correlation with GAI policy stringency. Permitting a broader use of GAI, especially for language editing and manuscript generation, was strongly correlated with mandatory disclosure requirements. Although most medical journals have established GAI policies, significant gaps remain in comprehensiveness and specificity. The strong publisher-driven pattern suggests opportunities for developing harmonized, specialty-specific standards.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations