This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Is ChatGPT's Knowledge on Rhinology Accurate? Can It Be Utilized in Medical Education and Patient Information?
Citations: 0
Authors: 4
Year: 2026
Abstract
Objective: ChatGPT is a new artificial intelligence model designed to generate human-like conversations. As knowledge advances and technology improves, it shows promise in the field of medicine, especially as a resource for patients and clinicians. The aim of our study was to measure the accuracy and consistency of ChatGPT's answers to questions in the field of rhinology.

Methods: In March 2024, 130 rhinology questions were presented to ChatGPT (version 4). Each question was asked twice, and the consistency and reproducibility of the answers were investigated. The answers were evaluated by three ENT physicians.

Results: ChatGPT's answers were consistent at a rate of 91.5% (119/130). Among the inconsistent answers, the second answer was found to be more correct in 10 of 11 cases; statistically, the second answer was more correct (p = 0.011). Across the 130 questions, the three evaluators rated 99, 81, and 80 answers, respectively, as completely correct (76.2%, 62.3%, and 61.5%), while completely incorrect answers were observed at rates of 5.4%, 4.6%, and 5.4%, respectively. There was no statistically significant difference between the evaluators (p = 0.270).

Conclusion: The inaccuracy rate of ChatGPT in patient information and education processes is considered acceptable, and ChatGPT can be regarded as reliable. However, its answers are not always completely correct and can be misleading for some questions. We believe it would be safer and more accurate to use ChatGPT as an informative and educational tool for patients under expert supervision.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,357 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,221 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,640 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,482 citations