OpenAlex · Updated hourly · Last updated: 28.03.2026, 02:16

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Regulations of ChatGPT use in paper writing: Based on beliefs or practical inevitability?

2024 · 2 citations · Australian and New Zealand Journal of Obstetrics and Gynaecology · Open Access
Open full text at publisher

2 citations · 2 authors · Year: 2024

Abstract

Most journals follow the International Committee of Medical Journal Editors' (ICMJE) recommendations on using generative AI such as ChatGPT in academic writing.[1] Authors must verify ChatGPT's output and declare its use. Such practices were routine even before ChatGPT: when someone polished a manuscript, that help required checking and declaration. The key issue is the extent to which generative AI may be used. Here, for simplicity, ‘ChatGPT’ refers to generative AI in general. Some journals (or publishers) impose strict regulations, permitting ChatGPT solely for ‘improving readability and language’,[2] while others are more lenient, requiring only a usage declaration. The Australian and New Zealand Journal of Obstetrics and Gynaecology (ANZJOG) appears to belong to the latter category, as its hyperlink (Wiley Guideline) does not clearly define the permissible extent of AI use.[3]

Such variation among journals may cause confusion. First, acceptance at the first journal is not guaranteed. Among the first author's 560+ PubMed-indexed papers, one was accepted only by the ninth journal to which it was submitted. If a ChatGPT-reliant paper is rejected by a lenient journal, it may then be submitted to a stricter one, risking non-adherence to the latter's regulations. Second, whether one can ‘fully’ declare ChatGPT use is unclear. For example, ANZJOG (Wiley Guideline) states that ‘ChatGPT use must be described in detail’ and that ‘The final decision about whether its use is permissible lies with the journal's editor’.[3] If one relies heavily on ChatGPT (e.g. ‘writing based on inputted ideas’) and fully declares it, how the journal will respond remains uncertain, making full disclosure unlikely.

Journals seem to struggle to formulate regulations because of uncertainty about ChatGPT's long-term effects. No data show that it harms human writing or thinking, but since writing is a fundamental human behaviour, reliance on ChatGPT may affect these abilities. Furthermore, just as smartphone light disrupts sleep,[4] ChatGPT could have more serious effects. ChatGPT is now easily accessible, and some may find it difficult to avoid relying heavily on it when writing. Strict or lenient, current regulations appear to be based on the inevitability of ChatGPT use rather than on proven safety. Reality, rather than belief, shaped current regulations. Humans have erred before by ignoring future consequences, as with energy use and climate change. Once loosened, regulations are hard to tighten. Taken together, regulations should proceed on the assumption that ChatGPT use may have unintended effects, and we should err on the side of caution. Thus, we hope that journals will adopt the following stance and convey this message: ‘As the long-term effects of AI are unknown, ChatGPT should only be used as a linguistic checker in the final stages until further evidence is available’.[5] Once safety is established, loosening regulations is easy.

We do not intend to criticise any journals (including ANZJOG) or publishers, nor do we demand immediate changes to author guidelines. Our aim is simply to provide a platform for readers, journals, and publishers to discuss this issue. Since ChatGPT detectors are imperfect,[6] some will use ChatGPT secretly, creating a ‘ChatGPT divide’, similar to the digital divide, and fostering inequity. How to handle this is another issue.

SM and DM identified the significance of the topic, and wrote and edited the manuscript.


Topics

Artificial Intelligence in Healthcare and Education