This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Risk and potential of ChatGPT in scientific publishing
8
Citations
7
Authors
2023
Year
Abstract
Chat generative pre-trained transformer (ChatGPT) is an advanced large language model (LLM) that uses deep learning techniques. The chatbot was first released in November 2022 by OpenAI, an artificial intelligence company based in San Francisco, and is currently one of the largest publicly available LLMs.1 Numerous potential applications of ChatGPT or other LLMs in science and medicine include identification of research topics and assisting health care professionals in clinical and laboratory diagnosis. In the field of medical science, where accuracy and credibility are essential, it is necessary to carefully weigh the risks and benefits associated with using LLMs in scientific publishing.2 If you ask ChatGPT (Version 3.5, on July 17, 2023) to summarise the "risk and potential of ChatGPT in medical scientific publications" in two to three sentences, the answer is: "Using ChatGPT in the publication of medical science poses risks such as potential inaccuracies, limited context understanding, lack of domain-specific knowledge, and ethical concerns regarding authorship. However, it also offers chances for assistance in literature review, idea generation, collaborative writing, and facilitating the dissemination of knowledge to a broader audience. Researchers should exercise caution, verify information independently, and consult domain experts to mitigate risks and maximise the benefits of using ChatGPT in medical science publication." This summary captures quite nicely the risks and potential of using ChatGPT in scientific publishing. It is becoming evident that we need clear rules for dealing with this fairly new development. The advantages for authors are quite obvious. LLMs can support the writing process by providing suggestions for sentence structure, grammar, and style and thus help improve the clarity and readability of scientific manuscripts. This may be particularly helpful to non-native speakers of English, potentially democratising scientific writing.
Researchers could also use it to explore different perspectives, design experimental plans, or consider alternative approaches to a problem. However, the associated risks need to be clear to every author. ChatGPT generates responses based on patterns it has learned from training data, but it may provide factually inaccurate or misleading information and answers may not be evidence-based. LLMs do not have real-time context awareness, which means they may not fully understand the specific context, potentially leading to misinterpretation or incorrect conclusions. The training data for ChatGPT are not publicly available, leading to a lack of transparency. Further, the training dataset is limited to information available in 2021,1 so ChatGPT is not up to date on the latest research. Finally, a significant limitation is ChatGPT's inability to cite its sources; if researchers use LLMs to produce content for publication without proper acknowledgement or transparency, this raises ethical issues related to plagiarism, authorship, and scientific integrity. What is the correct way to use this new application in scientific writing? Journal editors, researchers, and publishers are now debating the place of such tools in the published literature, and whether it is appropriate to cite the bot as an author. Some preprint servers allow inclusion of ChatGPT as a co-author,3 but this has been rejected by the editors-in-chief of Nature and Science since ChatGPT cannot bear responsibility for the content and authenticity of scientific studies and thus does not meet the criteria for a study author.4 This rule has been correspondingly applied by most scientific journals and is also the position of the Editorial Team of the JIMD. Nevertheless, transparency regarding the utilisation of ChatGPT or other LLMs in a study or manuscript, including a clear indication of its use in the materials and methods section, is currently mandatory for all scientific journals. 
As we navigate the uncharted waters of incorporating ChatGPT or other LLMs into publications in medical science, we must proceed cautiously, fully aware of both the risks and opportunities of their integration into scientific writing. The potential for inaccuracies and the ethical implications of their implementation cannot be ignored. The challenge will be to embrace this evolution and, by combining the strengths of human expertise with their technical capabilities, maximise the overall potential while maintaining the highest standards of accuracy, transparency, and integrity. For now, further scrutiny of AI-generated text in scientific writing is needed. We hope the readership of the Journal of Inherited Metabolic Disease will consider these issues and contribute to the debate.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations
Authors
Institutions
- Heidelberg University(DE)
- University Hospital Heidelberg(DE)
- University of Zurich(CH)
- University Children's Hospital Zurich(CH)
- Mayo Clinic(US)
- Mayo Clinic in Florida(US)
- Innsbruck Medical University(AT)
- Great Ormond Street Hospital for Children NHS Foundation Trust(GB)
- Wellcome Centre for Mitochondrial Research(GB)
- University College London(GB)