This is an overview page with metadata for this scientific work. The full article is available from the publisher.
<i>Editorial Commentary:</i> At Present, ChatGPT Cannot Be Relied Upon to Answer Patient Questions and Requires Physician Expertise to Interpret Answers for Patients
Citations: 3
Authors: 3
Year: 2024
Abstract
ChatGPT is designed to provide accurate and reliable information to the best of its abilities based on the data input and knowledge available. Thus, ChatGPT is being studied as a patient information tool. This artificial intelligence (AI) tool has been shown to frequently provide technically correct information but with limitations. ChatGPT provides different answers to similar questions based on the prompts, and patients may not have expertise in prompting ChatGPT to elicit a best answer. (Prompting large language models has been shown to be a skill that can improve.) Of greater concern, ChatGPT fails to provide sources or references for its answers. At present, ChatGPT cannot be relied upon to address patient questions; in the future, ChatGPT will improve. Today, AI requires physician expertise to interpret AI answers for patients.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,469 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,358 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,803 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,542 cit.