OpenAlex · Updated hourly · Last updated: 16.03.2026, 17:54

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Appropriate Use and Reporting of AI tools in Manuscript Preparation

2026 · 0 citations · Saudi Journal of Medicine and Medical Sciences · Open Access

Citations: 0 · Authors: 3 · Year: 2026

Abstract

Artificial intelligence (AI)-assisted technology is having a transformative effect on almost all fields, often resulting in changes to established workflows. Similarly, AI-assisted technologies/chatbots are rapidly being adopted in scientific publishing by authors, publishers/journals, and readers. For researchers, AI use allows a faster route from hypothesis/raw data to published article. A recent survey by Elsevier that included 3234 active researchers from 113 countries found that, in 2025, 58% of researchers used AI tools for work (up from 37% in 2024), 58% stated that AI currently saves them time, and 69% expected its use to save them time in the next 2–3 years.[1] The wide adoption of AI in the preparation of manuscripts is therefore largely expected; however, to maintain scholarly rigor, integrity, and data confidentiality, researchers should be mindful of certain aspects. Through this Editorial, we provide a quick snapshot of current recommendations, best practices, and considerations when using AI in the preparation of manuscripts.

Current Recommendations and Real-Life Practices

According to all commonly used ethical/editorial recommendations, such as the ICMJE, WAME, COPE, STM, and EASE guidelines, AI cannot be credited as a co-author of a manuscript. In addition, at the time of submission, authors are required to disclose their use of AI-assisted technology in the Materials and Methods, Acknowledgments, or a separate AI-declaration section (depending on its use and journal policy).[2-6] It should also be noted that AI cannot be used to create, alter, or manipulate data/results or images derived directly from research.
Further, AI-generated images may require additional checks by editors/publishers.[4,6] In fact, to protect scientific integrity and avoid copyright issues, several publishers/journals do not allow submissions to include images created or significantly altered using AI-assisted tools, with certain exceptions.[7,8] Despite their usefulness, it is now well known that AI-assisted technologies can fabricate facts (depending on the type of prompt) and references. This is particularly concerning given that, according to Elsevier's "Researcher of the Future" report, AI is most commonly used for literature reviews (51%),[1] indicating heavy reliance on facts and references provided by AI. Therefore, in essence, all major guidelines suggest that although the use of AI is at the discretion of authors (especially for text, less so for images), human oversight and control are necessary to independently verify all content refined/generated using AI because, ultimately, human authors are responsible (legally and ethically) for the scientific correctness, integrity, and originality of published manuscripts.

Why, When, and How Should AI Use Be Reported

The use of AI in preparing manuscripts is underreported by authors, and science sleuths have found several hundred such papers.[9] This is further reiterated in a survey of >5000 researchers by Nature, which found that those who use AI for preparing manuscripts often do not disclose its use.[10] This could be due to gaps in knowledge regarding the need for such reporting or hesitancy among authors. However, it is important to note that reporting the use of AI in preparing a manuscript should ideally not result in discrimination against that manuscript; rather, it increases transparency and trust between authors, editors, reviewers, and readers, which is a scientific norm.
Also, given that current AI-detection software does not offer adequate efficiency and accuracy,[11,12] the onus is on authors to report AI use at the time of submitting manuscripts to support best practices. Although the use of AI for grammar, spelling, punctuation, and syntax checks often does not require declaration, it is preferred that AI use is reported when it has been used to substantially modify sentences and structure (i.e., for substantive or scientific editing).[2,4] In addition, its use should be reported for content/image generation, assistance with literature review, gathering of references, and data analysis/interpretation. Current recommendations on which aspects of AI use should be reported vary across publishers/journals; therefore, the instructions to authors of each journal should be consulted before submission. Very recently, the STM Association provided editors with nine recommended classifications of AI use in academic manuscript preparation, which could be adopted by several publishers/journals.[13] Currently, these recommended classifications could also help authors frame their reporting sentences based on the description of the activity. Journals/editors could also adopt the GAIDeT taxonomy, which has been designed specifically for documenting task delegations to AI within the research workflow.[14] While reporting guidelines on AI use in manuscript preparation are being consolidated, authors could draft a comprehensive AI-use declaration in advance and keep it available to meet differing journal requirements at submission. It is also recommended that authors keep a backup of the AI prompts used during manuscript preparation, as some journals may request that they be provided as supplementary material.
Plagiarism and Data Confidentiality Considerations

AI-generated text may include block quotations of copyrighted third-party material, which could infringe on the rights of the copyright holder and/or constitute plagiarism. Therefore, authors are required to ensure that AI-generated content does not result in plagiarism or copyright infringement. Before uploading unpublished research material/data to AI tools, researchers should consider the privacy and confidentiality of their data and inputs, especially when these pertain to novel ideas/hypotheses or the use of identifiable data. Researchers can address this issue by checking whether the AI tool uses submitted data for training purposes; if it does, the uploaded data could be reused by the AI tool and reproduced in future outputs for other users without appropriate copyright attribution or referencing.[4] Researchers are more likely to benefit from using academic-specific AI tools rather than general large language models, because academic-specific tools are tailored for scientific publishing and are more likely to maintain scientific rigor and integrity, for example, by integrating recommended ethical considerations and not using uploaded data to train their AI models (although this needs to be verified for each AI tool before use). In conclusion, researchers should remember that the absolute norms of scholarly publishing are accuracy in content (for which AI tools cannot be held accountable) and transparency in reporting, including when AI has been used; thus, human authors are accountable for all content within their article.

Topics

Artificial Intelligence in Healthcare and Education · Academic integrity and plagiarism · Academic Publishing and Open Access