OpenAlex · Updated hourly · Last updated: 14.03.2026, 06:18

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Artificial intelligence in academic writing: Insights from journal publishers’ guidelines

2024 · 7 citations · Perspectives in Clinical Research · Open Access

7 citations · 3 authors · published 2024

Abstract

INTRODUCTION

Generative artificial intelligence (AI) technologies have the potential to be incorporated into scientific research and scholarly writing. Large language models (LLMs), such as ChatGPT, Gemini, and Copilot, have created a ripple in the scientific writing process, as these freely accessible LLM-based chatbots can generate content at speeds humans may never achieve.[1] However, a question remains in authors' minds: is it ethical to use AI in the writing process? To find an answer, we analyzed the available guidelines of journal publishers regarding the use of AI in manuscript preparation.

METHODS

This was a cross-sectional audit of public-domain data freely available on journal or publisher websites (cutoff April 15, 2024). Two authors individually compiled lists of 20 prominent publishers, drawn from internationally reputed journals and from knowledge gained through previous literature and Internet searches.[2] A consensus was reached on a final list of 20 publishers, and their websites were searched for guidelines on the use of AI in the writing process. The list of publishers can be accessed from https://doi.org/10.6084/m9.figshare.25975279.v1. Themes were identified from the text in QDA Miner Lite v3.0.5 (Provalis Research, Montreal, Canada).

RESULTS

From the publishers' guidelines on the use of AI in manuscripts, we identified a total of six themes, described below.

Responsibility: Authors are expected to use AI tools responsibly, with human oversight, to ensure the accuracy, validity, and integrity of the content. Elsevier notes that authors "should carefully review and edit the result" before using it in the manuscript.

Authorship: AI tools, including generative AI such as LLMs, cannot fulfill the authorship criteria set by the International Committee of Medical Journal Editors (ICMJE). Scientific Scholar advises against adding chatbots as authors because they do not fulfill the ICMJE criteria and, in addition, do not have "affiliation independent of their developers."

Declaration: Authors are required to disclose the use of AI tools in their manuscripts, including details such as the name, version, and purpose of the tool used. Springer suggests that the use of an LLM should be "properly documented in the methods section." Wiley suggests adding details in "the methods section (or via a disclosure or within the acknowledgments section, as applicable)."

Productivity: Copywriting and copyediting are two interlaced parts of manuscript preparation, and AI can help with both. Taylor and Francis acknowledges that AI is gradually being assimilated into academic writing and that its proper use has "the potential to augment research outputs and thus foster progress through knowledge."

Limitation: There are several limitations to using AI in academic writing. SAGE points out that AI chatbots, including "LLMs can 'hallucinate,' i.e. generate false content," and that they "can generate content that is linguistically but not scientifically plausible."

Future prospect: The role of AI in research and scholarly publishing is evolving, suggesting that AI will become increasingly integrated into the publishing process. MDPI predicts that "in a few years, AI will become the norm, like how the Internet or Google are now."

DISCUSSION

The Committee on Publication Ethics suggests transparent declaration of AI use, with details of the tool, and agrees that "the use of AI tools such as ChatGPT or LLMs in research publications is expanding rapidly."[3] The World Association of Medical Editors has suggested that authors can use AI for a variety of tasks, such as "(1) simple word-processing tasks, (2) the generation of ideas and text, and (3) substantive research."[4] From the publishers' guidelines, it is evident that there is no general prohibition against the acceptance of AI-generated content. However, because AI cannot be an author under the ICMJE criteria, authors should check accuracy and plagiarism and edit the content before using it in the manuscript. Authors bear responsibility for the content they publish and should ensure transparent declaration or acknowledgement of the help taken from AI.

Financial support and sponsorship: Nil.

Conflicts of interest: There are no conflicts of interest.
