This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Is Artificial Intelligence Ready for Emergency Department Triage? A Retrospective Evaluation of Multiple Large Language Models in 39,375 Patients at a University Emergency Department
Citations: 0
Authors: 12
Year: 2026
Abstract
<b>Background:</b> Large language models (LLMs) are increasingly proposed as clinical decision support tools. However, their reliability in emergency department (ED) triage remains insufficiently validated. This study aimed to evaluate the performance and limitations of multiple LLMs in triage using a large retrospective dataset. <b>Methods:</b> We conducted a retrospective analysis of 39,375 anonymized patient cases from the ED of AHEPA University General Hospital, Thessaloniki, Greece (June 2024-July 2025), extracted from the hospital's electronic medical record system. All cases were triaged in real time according to the Emergency Severity Index (ESI) by 25 emergency physicians. In cases of uncertainty, a senior emergency physician was consulted. Seven LLMs (ChatGPT-5 Thinking, ChatGPT-5 Instant, Gemini 2.5, Qwen 3, Grok 4.0, DeepSeek v3.1, and Claude Sonnet 4) were evaluated against the physician-assigned ESI level (reference standard). Outcomes included triage score agreement (quadratic weighted kappa, κw), clinic referral accuracy, and admission prediction. Subgroup analyses were performed by referral clinic and admission outcome. The study was conducted in accordance with TRIPOD-AI reporting guidelines. <b>Results:</b> Model performance varied substantially. DeepSeek and Claude Sonnet 4 achieved the highest agreement with physician-assigned ESI (κw ≈ 0.467; raw accuracy: 61.7%). In contrast, GPT-5 Instant performed poorly across all evaluation metrics (κw = 0.176; 95% CI: 0.167-0.186). Claude Sonnet 4 demonstrated the best performance in clinic referral (67.1%; κ = 0.619) and admission prediction (κw ≈ 0.46). Subgroup analyses indicated higher performance in pediatric cases and organ-specific complaints, such as ophthalmology (up to 81% accuracy). LLMs also showed tendencies toward over- or under-triage. <b>Conclusions:</b> Current LLMs demonstrate promising but inconsistent capability in triage.
While some models achieved moderate alignment with physician ESI decisions, none reached strong agreement (κ > 0.80). LLMs are best suited as supervised decision support tools, particularly in anatomically well-defined clinical scenarios, rather than as autonomous systems.
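The study's primary agreement metric, quadratic weighted kappa (κw), penalizes disagreements by the squared distance between ordinal levels, which suits the five-level ESI scale. The sketch below is a minimal, self-contained illustration of how κw is computed for hypothetical ESI labels; the study's actual analysis pipeline is not described here, and the function name and data are invented for demonstration.

```python
import numpy as np

def quadratic_weighted_kappa(y_true, y_pred, n_levels=5):
    """Quadratic weighted kappa for ordinal labels 1..n_levels (e.g. ESI 1-5)."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    # Observed agreement matrix, normalized to joint probabilities
    observed = np.zeros((n_levels, n_levels))
    for t, p in zip(y_true, y_pred):
        observed[t - 1, p - 1] += 1
    observed /= observed.sum()
    # Expected matrix under independence (outer product of the marginals)
    expected = np.outer(observed.sum(axis=1), observed.sum(axis=0))
    # Quadratic disagreement weights: 0 on the diagonal, 1 at maximum distance
    i, j = np.indices((n_levels, n_levels))
    weights = ((i - j) ** 2) / (n_levels - 1) ** 2
    return 1.0 - (weights * observed).sum() / (weights * expected).sum()

# Hypothetical example: physician ESI vs. model ESI for six cases
physician = [1, 2, 3, 3, 4, 5]
model     = [1, 2, 3, 4, 4, 5]
print(round(quadratic_weighted_kappa(physician, model), 3))
```

Because the weights grow quadratically with distance, a model that is off by one ESI level is penalized far less than one that labels a level-1 emergency as level 5, which is why κw is preferred over raw accuracy for ordinal triage scales.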
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations