This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Reply to “Comment on ChatGPT failed Taiwan’s Family Medicine Board Exam”
Citations: 5
Authors: 2
Year: 2023
Abstract
DEAR EDITOR,

We thank Mungmunpuntipantip and Wiwanitkit1 for their valuable comments on our study entitled “ChatGPT failed Taiwan’s Family Medicine Board Exam,” published in the Journal of the Chinese Medical Association.2 The authors’ comments are discussed below.

We fully agree on the importance of the ethical issue of Chat Generative Pre-trained Transformer (ChatGPT) drafting, editing, or approving sensitive information without human supervision. In our study, we used only publicly available exam items to evaluate ChatGPT’s performance in answering questions. Although the result was far from satisfactory at the current stage, we believe that ChatGPT and related products can be further developed to serve as good teaching tools, either to verify the difficulty level of tests ex ante or to generate a database of questions for selection.3 The utility of ChatGPT can, of course, be extended to other areas of clinical practice, from medical documentation to decision support,4 in addition to academic research.5

Mungmunpuntipantip and Wiwanitkit1 also cautioned against exposing private data to ChatGPT. Many people and companies share the same concerns, and information technology enterprises see business opportunities. For example, Microsoft Corporation (Redmond, WA, USA) announced in March 2023 that it would release a privacy-focused version of ChatGPT running on dedicated cloud servers.6 In the foreseeable future, healthcare facilities may seek to deploy in-house servers incorporating large language models, that is, to construct domain- or specialty-specific chatbots trained on in-house data to answer questions relevant to daily healthcare services.

This is just the beginning. Artificial intelligence with natural language processing will accelerate the transformation of workflows. Every effort should be made to fully understand the applications, benefits, limitations, and risks of this new technology.