
Risk and potential of ChatGPT in scientific publishing

2023 · 8 citations · Journal of Inherited Metabolic Disease · Open Access

Abstract

Chat generative pre-trained transformer (ChatGPT) is an advanced large language model (LLM) that uses deep learning techniques. The chatbot was first released in November 2022 by OpenAI, an artificial intelligence company based in San Francisco, and is currently one of the largest publicly available LLMs [1]. Potential applications of ChatGPT and other LLMs in science and medicine are numerous and include the identification of research topics and assistance to health care professionals in clinical and laboratory diagnosis. In the field of medical science, where accuracy and credibility are essential, the risks and benefits of using LLMs in scientific publishing must be weighed carefully [2]. If you ask ChatGPT (version 3.5, on July 17, 2023) to summarise the "risk and potential of ChatGPT in medical scientific publications" in two to three sentences, the answer is: "Using ChatGPT in the publication of medical science poses risks such as potential inaccuracies, limited context understanding, lack of domain-specific knowledge, and ethical concerns regarding authorship. However, it also offers chances for assistance in literature review, idea generation, collaborative writing, and facilitating the dissemination of knowledge to a broader audience. Researchers should exercise caution, verify information independently, and consult domain experts to mitigate risks and maximise the benefits of using ChatGPT in medical science publication." This summary captures the risks and potential of using ChatGPT in scientific publishing quite well. It is becoming clear that we need explicit rules for dealing with this fairly new development. The advantages for authors are obvious: LLMs can support the writing process by suggesting improvements to sentence structure, grammar, and style, thereby helping to improve the clarity and readability of scientific manuscripts. This may be particularly helpful to non-native speakers of English, potentially democratising scientific writing.
Researchers could also use it to explore different perspectives, design experimental plans, or consider alternative approaches to a problem. However, the associated risks need to be clear to every author. ChatGPT generates responses based on patterns it has learned from training data, but it may provide factually inaccurate or misleading information, and its answers may not be evidence-based. LLMs do not have real-time context awareness, which means they may not fully understand the specific context, potentially leading to misinterpretation or incorrect conclusions. The training data for ChatGPT are not publicly available, resulting in a lack of transparency. Further, the training dataset is limited to information available in 2021 [1], so ChatGPT is not up to date on the latest research. Finally, a significant limitation is ChatGPT's inability to cite its sources; if researchers use LLMs to produce content for publication without proper acknowledgement or transparency, this raises ethical issues related to plagiarism, authorship, and scientific integrity. What is the correct way to use this new application in scientific writing? Journal editors, researchers, and publishers are now debating the place of such tools in the published literature, and whether it is appropriate to cite the bot as an author. Some preprint servers allow the inclusion of ChatGPT as a co-author [3], but this has been rejected by the editors-in-chief of Nature and Science, since ChatGPT cannot bear responsibility for the content and authenticity of scientific studies and thus does not meet the criteria for a study author [4]. This rule has been adopted by most scientific journals and is also the position of the Editorial Team of the JIMD. Nevertheless, transparency regarding the use of ChatGPT or other LLMs in a study or manuscript, including a clear indication of its use in the materials and methods section, is currently mandatory for all scientific journals.
As we navigate the uncharted waters of incorporating ChatGPT or other LLMs into publications in medical science, we must proceed cautiously, fully aware of both the risks and opportunities of their integration into scientific writing. The potential for inaccuracies and the ethical implications of their implementation cannot be ignored. The challenge will be to embrace this evolution and, by combining the strengths of human expertise with their technical capabilities, maximise the overall potential while maintaining the highest standards of accuracy, transparency, and integrity. For now, further scrutiny of AI-generated text in scientific writing is needed. We hope the readership of the Journal of Inherited Metabolic Disease will consider these issues and contribute to the debate.
