OpenAlex · Updated hourly · Last updated: 26.03.2026, 22:46

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Comparative Evaluation of Deep-Reasoning Large Language Models for Ophthalmic Emergencies

2026 · 0 citations · Ophthalmology Science · Open Access
Open full text at the publisher

Citations: 0 · Authors: 5 · Year: 2026

Abstract

Purpose: To evaluate contemporary deep-reasoning LLMs for early assessment of ophthalmic emergencies using sequential, workflow-mimicking information levels.

Design: Cross-sectional, vignette-based, head-to-head comparative evaluation.

Subjects: Thirty-four de-identified emergency ophthalmology teaching cases curated from a publicly accessible repository.

Methods: Each case was reconstructed into three sequential information levels (L1: history; L2: basic examination; L3: specialist examination). Six LLMs (Doubao, DeepSeek, Kimi-2, ChatGPT-5, Gemini-3, and Grok-4), operating in deep-reasoning mode, generated outputs that were independently scored by two ophthalmologists. Diagnoses were graded as fully correct, partially correct, or incorrect; triage category (typical vs. atypical emergency) was rated as correct or incorrect. Ancillary test recommendations were mapped to a prespecified 10-category taxonomy and classified as under-testing, exact match, or over-testing. A four-level composite outcome integrated diagnostic correctness, triage accuracy, and testing.

Main Outcome Measures: Diagnostic correctness (fully correct, partially correct, incorrect), triage-category accuracy, ancillary test recommendation patterns, and composite outcome (ideal, safe but over-testing, potentially dangerous, intermediate).

Results: Across 612 model-case-level outputs, 46.9% of diagnoses were fully correct, 24.5% partially correct, and 28.6% incorrect. Fully correct diagnoses increased from 43.1% at L1 to 53.9% at L3 (p = 0.048). Overall triage-category accuracy was 85.3% (range, 76.5%-94.1% across models; p = 0.003) and did not differ across information levels (p = 0.89). Ancillary test recommendations most commonly reflected under-testing (51.0%), followed by over-testing (27.5%) and exact matches (21.6%) (p < 0.001 across models). In generalized estimating equation pairwise comparisons, ChatGPT-5 showed higher odds of a fully correct diagnosis than DeepSeek (odds ratio [OR] 3.54, 95% confidence interval [CI] 1.49-8.43) and Gemini-3 (OR 2.24, 95% CI 1.31-3.83), and lower odds of potentially dangerous composite outcomes than DeepSeek (OR 0.28, 95% CI 0.10-0.74) and Gemini-3 (OR 0.31, 95% CI 0.11-0.89).

Conclusions: Deep-reasoning LLMs demonstrated high triage-category accuracy and moderate diagnostic performance for ophthalmic emergencies, with diagnostic correctness improving at higher information levels. However, ancillary testing patterns varied substantially, and ideal composite safety profiles were uncommon, supporting cautious, supervised deployment with explicit guardrails governing workup recommendations.
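The pairwise comparisons above rest on a binomial generalized estimating equation (GEE) clustered on case, which accounts for the fact that every model answers the same 34 vignettes at three information levels (6 × 34 × 3 = 612 outputs, matching the abstract's denominator). The paper's analysis code is not published; the sketch below only illustrates how such a model could be fit in Python with statsmodels on simulated stand-in data. The column names (model, case_id, level, fully_correct) and the choice of ChatGPT-5 as reference level are assumptions for illustration, not the authors' specification.

```python
# Minimal GEE sketch on simulated data; an illustration of the technique,
# not a reproduction of the study's actual analysis.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
models = ["Doubao", "DeepSeek", "Kimi-2", "ChatGPT-5", "Gemini-3", "Grok-4"]
levels = ["L1", "L2", "L3"]

# 6 models x 34 cases x 3 levels = 612 rows, as in the abstract.
rows = [(m, c, lv) for m in models for c in range(34) for lv in levels]
df = pd.DataFrame(rows, columns=["model", "case_id", "level"])
df["fully_correct"] = rng.integers(0, 2, size=len(df))  # placeholder 0/1 outcome

# Binomial GEE with an exchangeable working correlation, clustered on case_id,
# so repeated outputs on the same vignette are not treated as independent.
fit = smf.gee(
    "fully_correct ~ C(model, Treatment('ChatGPT-5')) + C(level)",
    groups="case_id",
    data=df,
    family=sm.families.Binomial(),
    cov_struct=sm.cov_struct.Exchangeable(),
).fit()

# Exponentiated coefficients are odds ratios relative to the reference model.
print(np.exp(fit.params))      # ORs vs. the ChatGPT-5 reference
print(np.exp(fit.conf_int()))  # 95% confidence intervals on the OR scale
```

With the real scored outputs in place of the simulated outcome, the exponentiated model coefficients and their confidence intervals would be read off exactly as the OR (95% CI) figures reported in the Results.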

Topics

Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills · Radiology practices and education