This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
An assessment of the capability of ChatGPT in solving clinical cases of ophthalmology using multiple choice and short answer questions
Citations: 4
Authors: 3
Year: 2024
Abstract
In healthcare, AI chatbots like ChatGPT hold the potential to assist in medical education and clinical decision-making. This study assesses the performance of AI chatbots, specifically ChatGPT 3.5 and ChatGPT 4, in answering ophthalmology-related questions from medical examinations, including FMGE Multiple Choice Questions, Higher Order Thinking Multiple Choice Questions, and Clinical Reasoning Short Answer Questions. While both versions of ChatGPT demonstrated strong capabilities in handling medical knowledge, higher-order thinking questions, and clinical reasoning, discrepancies were observed in specific cases. Inaccuracies in responses underscore the need for continuous refinement and validation of AI models in specialized medical fields like ophthalmology. Despite these limitations, AI holds promise in enhancing medical education and supporting clinical decision-making.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations