This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
In Reply: I Asked a ChatGPT to Write an Editorial About How We Can Incorporate Chatbots Into Neurosurgical Research and Patient Care…
Citations: 2
Authors: 4
Year: 2023
Abstract
To the Editor: Kleebayoon and Wiwanitkit1,2 offer a thoughtful and important perspective on the use of ChatGPT in neurosurgical care, namely, concerns regarding the accuracy of generated data and issues surrounding malpractice and plagiarism. They argue that using computational tools to generate primary material is morally unsound and unethical, and that careful development of a code of conduct for using large language models is necessary to prevent misuse. We agree and emphasize that transparency around ChatGPT-generated content is of paramount importance in upholding the ethical pillars central to medical ethics: nonmaleficence, beneficence, social justice, and autonomy. Everyone who generates or consumes ChatGPT-generated content, including patients, clinicians, and researchers, should be aware of the sources of their information so they can make informed decisions about how to use the information presented to them. Furthermore, validating the accuracy of generated content will be central to incorporating these technologies into medical care. Concerns about misinformation in health care are not novel, and the safeguards already applied there should be extended to ChatGPT-generated content as well. Judicious and cautious use is necessary: just as clinicians, educators, and researchers carefully assess third-party and online information before implementing findings in clinical and academic practice, so too should they scrutinize ChatGPT content. Because ChatGPT and other large language models are being implemented in health care at a rapid pace, Kleebayoon and Wiwanitkit prudently call for appropriately formed codes of conduct governing the integration of ChatGPT into clinical and academic practice. ChatGPT's potential for misuse is obvious, and we agree that policies governing its use are necessary to ensure that the scope of its utility remains appropriate and safe.
That said, we believe that there are safe ways for clinicians to incorporate ChatGPT technologies into modern medical practice in a way that improves clinical and academic efficiencies, data collection and interpretation, and ultimately patient outcomes through its incredible computational abilities. The future is not the replacement of the human contribution to medicine, but a careful collaborative effort with these technologies to advance our field.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations