This is an overview page with metadata for this scientific work. The full article is available from the publisher.
ChatGPT-4 and ethical responsibilities in publication
Citations: 0
Authors: 4
Year: 2024
Abstract
We read with great interest the recent article by Wen et al.1 regarding the capabilities of ChatGPT in academic research and publishing. In November 2022, OpenAI, based in San Francisco, California, released the ChatGPT language generation model, an artificial intelligence (AI)-powered technology that produces high-quality writing. There is widespread interest in the use of ChatGPT in medicine, including in patient care and scientific content documentation. In this regard, Gupta et al.2 demonstrated ChatGPT's ability to generate novel systematic review concepts in the domain of plastic surgery. In a comparable context, Weidman et al.3 evaluated ChatGPT's ability to generate high-quality, reproducible, and plagiarism-free poetry on plastic surgery. ChatGPT has proved capable of producing acceptable, credible responses to diverse human inquiries. There is also evidence that ChatGPT has passed several academic examinations with acceptable scores, including the National Board of Medical Examiners examinations for third-year medical students and the United States Medical Licensing Examination Step examinations4. Going a step further, O'Connor5 named ChatGPT as a co-author on an article. However, there are differing opinions on whether ChatGPT qualifies as a co-author of scientific publications, and whether ChatGPT can write a scientific manuscript with creativity, innovation, and correctness remains contested. Although ChatGPT can generate a large number of plausible-sounding responses through supervised and reinforcement learning, AI-based technology may generate inaccurate and misleading data. Furthermore, ChatGPT is confined to the languages on which it was trained, so its results are not necessarily complete. ChatGPT also fails to distinguish new findings from previous evidence.
ChatGPT output appears to require double-checking by a human supervisor in areas such as plagiarism, verifiability, unsuitable references, and even language editing for non-native English authors. It is also vital to note that there are significant concerns about legal regulation of the use of ChatGPT to write academic reports. From an ethical standpoint, chatbot-generated content, such as that produced by ChatGPT, carries an intrinsic risk of academic dishonesty due to bias in training. The lack of publishing ethics and legal liability frameworks for the application of ChatGPT underscores our human duty, as senior reviewers, to test the accuracy and reproducibility of the content it generates. Weidman et al.3 proposed that ChatGPT answers may be susceptible to breaches or misuse owing to bias introduced by ChatGPT users during the teaching process. Moreover, according to the International Committee of Medical Journal Editors (ICMJE) criteria6, ChatGPT does not qualify as a co-author of academic medical publications. In conclusion, the influence of ChatGPT on scientific publishing remains unknown. Current evidence has emphasized its potential benefits in the disciplines of medicine, including practical decision-making and document production. However, ethical issues impose serious limitations on the deployment of this technology. In this situation, we propose that scientific content produced by ChatGPT be supervised by a blinded third senior reviewer who carefully examines and edits the data before it is used for scientific publication.

Ethics approval
Ethics approval was not required for this correspondence.

Consent
Informed consent was not required for this correspondence.

Sources of funding
This work did not receive financial support.

Author contribution
M.A., F.D., and F.P.: writing and editing the draft; M.K.: study design, data collection, and writing and editing the draft. All authors read and approved the final version of the manuscript.

Conflicts of interest disclosure
There are no conflicts of interest.

Research registration unique identifying number (UIN)
Not applicable.

Guarantor
All the authors of this paper accept full responsibility for the work and/or the conduct of the study, had access to the data, and controlled the decision to publish.

Provenance and peer review
Not commissioned, externally peer-reviewed.

Data availability statement
Not applicable.