This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Defining the Boundaries of AI Use in Scientific Writing: A Comparative Review of Editorial Policies
28 citations · 1 author · 2025
Abstract
The rapid rise of generative artificial intelligence (AI) is fundamentally transforming the landscape of medical writing and publishing. In response, major academic organizations and high-impact journals have released guidelines addressing core ethical concerns, including authorship qualification, disclosure of AI use, and the attribution of accountability. This review analyzes and compares key statements from several international medical or scientific editors' organizations along with submission policies of major leading journals. It also evaluates the AI usage policy of the Journal of Korean Medical Science (JKMS), which presents one of the most specific frameworks among Korean journals, and offers suggestions for refinement. While most journals prohibit listing AI tools as authors, their stance on AI-assisted writing varies. JKMS aligns with international norms by prohibiting AI authorship and recommending that authors explicitly report the tool name, prompt, purpose, and scope of AI use. This policy demonstrates a flexible but principled approach to AI integration. The limitations of AI detection tools are also discussed. These tools often struggle with accuracy and bias, with known tendencies to misclassify human-written content as AI-generated. As such, sole reliance on detection tools is insufficient for editorial decisions. Instead, fostering a culture of ethical authorship and responsible disclosure remains essential. This review highlights the need for balanced policies that promote transparency without impeding innovation. By clarifying disclosure expectations and reinforcing human accountability, journals can guide the ethical use of AI in scientific writing and maintain the integrity of scholarly communication.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,460 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,341 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,791 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,536 citations