Comparative performance of large language models in answering cornea and cataract surgery questions for resident training

2026 · 0 citations · 7 authors · BMC Ophthalmology · Open Access

Abstract

The application of large language models (LLMs) in the medical field has gained increasing popularity; however, their effectiveness in ophthalmology remains uncertain. This study aimed to evaluate the accuracy of responses generated by various deep learning-based LLMs to questions on cataract and corneal diseases and surgeries, and to assess their educational value by comparing LLM performance with that of ophthalmology fellows and residents. Eighty-one multiple-choice questions on corneal diseases and cataract surgery were developed in the standard format of the Korean ophthalmology board examination and categorized into three subtypes: recall-type (n = 27), interpretation-type (n = 27), and problem-solving-type (n = 27). The accuracy and appropriateness of responses from four commonly used LLMs (ChatGPT-4o, ChatGPT-5, Gemini 3.0 Pro, and Claude Sonnet 4.5) were compared with one another and with the performance of three ophthalmology residents and three corneal fellows. Among the four LLMs, ChatGPT-5 demonstrated the highest overall accuracy (75/81; 92.59%), followed by Gemini 3.0 Pro (73/81; 90.12%) and Claude Sonnet 4.5 (73/81; 90.12%), all of which outperformed ChatGPT-4o (70/81; 86.42%) as well as the ophthalmology fellows (86.42 ± 1.23%) and residents (82.30 ± 8.22%). For recall-type questions, ChatGPT-5 achieved the highest accuracy (92.59%), and the other three LLMs (85.19% each) outperformed both fellows (82.72 ± 4.28%) and residents (71.60 ± 17.50%). For interpretation-type questions, ChatGPT-5, Gemini 3.0 Pro, and Claude Sonnet 4.5 achieved perfect scores (100%), while ChatGPT-4o (92.59%) performed comparably to fellows (92.59 ± 7.41%) and better than residents (87.65 ± 4.28%). For problem-solving-type questions, however, all LLMs scored lower (ChatGPT-5: 85.19%; the others: 81.48%) than the mean performance of fellows (87.65 ± 11.91%). Although ChatGPT-5 showed the highest overall performance among the LLMs, the differences between models were not statistically significant. Compared with the ophthalmology trainees, ChatGPT-5, Gemini 3.0 Pro, and Claude Sonnet 4.5 achieved significantly higher overall scores than the mean trainee performance. Although the LLMs performed well on recall- and interpretation-type questions, their relatively lower accuracy on problem-solving-type questions suggests that further advances are needed before LLMs can serve as reliable educational tools in ophthalmology.
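
For quick reference, the headline numbers in the abstract are simple fractions over the 81 questions, and the trainee results are means ± SD over three raters each. The short Python sketch below reproduces those summaries; it is not from the paper, and the per-trainee correct counts are hypothetical values chosen only to be consistent with the reported mean ± SD.

```python
# Minimal sanity-check sketch (not the authors' code): accuracy figures are
# fractions over 81 questions; trainee results are mean +/- SD over 3 raters.
from statistics import mean, stdev

TOTAL = 81  # number of multiple-choice questions

def pct(correct: int, total: int = TOTAL) -> float:
    """Accuracy as a percentage of questions answered correctly."""
    return 100.0 * correct / total

# Overall correct counts reported for each LLM.
llm_correct = {
    "ChatGPT-5": 75,         # -> 92.59%
    "Gemini 3.0 Pro": 73,    # -> 90.12%
    "Claude Sonnet 4.5": 73, # -> 90.12%
    "ChatGPT-4o": 70,        # -> 86.42%
}
for model, n in llm_correct.items():
    print(f"{model}: {n}/{TOTAL} = {pct(n):.2f}%")

# Hypothetical per-trainee correct counts, chosen only to match the reported
# summaries (fellows: 86.42 +/- 1.23%, residents: 82.30 +/- 8.22%).
fellows = [pct(n) for n in (71, 70, 69)]
residents = [pct(n) for n in (74, 65, 61)]
for label, scores in (("fellows", fellows), ("residents", residents)):
    print(f"{label}: {mean(scores):.2f} +/- {stdev(scores):.2f}%")
```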
