This is an overview page with metadata for this scientific article. The full article is available from the publisher.
ChatGPT4.o Geriatrics Knowledge Competency and Its Evaluation by Geriatricians
Citations: 0
Authors: 5
Year: 2025
Abstract
ChatGPT has passed the USMLE and other medical knowledge exams, demonstrating competence in the medical field, but it has been less studied in Geriatric Medicine specifically. This study aimed to evaluate the geriatric competency of ChatGPT4.o by examining its performance on the validated UCLA Geriatrics knowledge test, comparing its performance with that of trainees, and exploring whether geriatricians agree with ChatGPT4.o's responses. ChatGPT4.o answered 18 UCLA Geriatrics knowledge questions. A correct answer was graded as 1, an incorrect answer as -1, and "don't know" as 0, so the total score ranged from -18 to +18. Test scores were calculated to compare ChatGPT4.o with trainees (medical students, internal medicine residents, and Geriatric Medicine fellows) from previously published studies. ChatGPT4.o's responses were also evaluated by participants on a 1-5 Likert scale (1 = strongly disagree, 5 = strongly agree). ChatGPT4.o's Geriatric knowledge score was 18, higher than that of all trainees (9.9, 9.5, 13.6, 13.2, 14.7, 14.9, and 17.5 for MS1, MS2, and MS3 students, PGY-1, PGY-2, and PGY-3 IM residents, and Geriatrics fellows, respectively). Six geriatricians (four faculty and two fellows) rated its performance at 4.2 on the 1-5 Likert scale. In conclusion, ChatGPT4.o outperformed the trainees on the Geriatric knowledge test, and geriatricians agreed with its performance, suggesting that ChatGPT4.o has competency in Geriatric knowledge. Concurrent testing of control trainees is underway, as their scores may differ from those of the historical controls due to the trainees' own use of artificial intelligence.
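The scoring scheme described in the abstract (correct = +1, incorrect = -1, "don't know" = 0, over 18 questions) can be sketched as follows; the function name and answer labels are illustrative assumptions, not taken from the paper.

```python
# Grade responses using the scheme from the abstract:
# "correct" = +1, "incorrect" = -1, "don't know" = 0.
# With 18 questions, the total score ranges from -18 to +18.

GRADES = {"correct": 1, "incorrect": -1, "don't know": 0}

def total_score(responses):
    """Sum the per-question grades for a list of response labels."""
    return sum(GRADES[r] for r in responses)

# Example: 16 correct, 1 incorrect, 1 "don't know" -> 16 - 1 + 0 = 15
example = ["correct"] * 16 + ["incorrect"] + ["don't know"]
print(total_score(example))  # prints 15
```

A perfect run of 18 correct answers yields the maximum score of 18, which is what ChatGPT4.o achieved in this study.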
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations