This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Is ChatGPT a Useful Tool for Ophthalmology Practice?
Citations: 0
Authors: 2
Year: 2024
Abstract
Aim: This study aimed to assess ChatGPT-3.5's performance in ophthalmology by evaluating its responses to clinical case-based and multiple-choice questions (MCQs). Methods: ChatGPT-3.5, an AI model developed by OpenAI, was employed. It responded to 98 case-based questions from "Ophthalmology Review: A Case-Study Approach" and 643 MCQs from the book "Review Questions in Ophthalmology". ChatGPT's answers were compared against the books' answer keys, and statistical analysis was conducted. Results: ChatGPT achieved an overall accuracy of 56.1% on case-based questions. Accuracy varied across categories, with the highest in the retina section (69.5%) and the lowest in the trauma section (38.2%). On MCQs, ChatGPT's accuracy was 53.5%, with the weakest performance in the optics section (32.6%) and the strongest in pathology and uveitis (66.7% and 63.0%, respectively). ChatGPT performed better on case-based questions than on MCQs in the retina and pediatric ophthalmology sections. Conclusion: ChatGPT-3.5 shows potential as a tool in ophthalmology, particularly in retina and pediatric ophthalmology. Further research is needed to evaluate the clarity and acceptability of ChatGPT's responses to open-ended questions.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations