OpenAlex · Updated hourly · Last updated: 13.03.2026, 17:27

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Assessing clinical acuity in the Emergency Department using the GPT-3.5 Artificial Intelligence Model

2023 · 7 citations · 6 authors · Open Access
Open full text at publisher

Abstract

This paper evaluates the performance of the Chat Generative Pre-trained Transformer (ChatGPT; GPT-3.5) in accurately identifying higher-acuity patients in a real-world clinical context. Using a dataset of 10,000 pairs of patient Emergency Department (ED) visits with varying acuity levels, we demonstrate that GPT-3.5 can successfully determine the patient with higher acuity based on clinical history sections extracted from ED physician notes. The model achieves an accuracy of 84% and an F1 score of 0.83, with improved performance for more disparate acuity scores. On the 500-pair subsample that was also manually classified by a resident physician, GPT-3.5 achieved similar performance (accuracy = 0.84, F1 score = 0.85) compared to the physician (accuracy = 0.86, F1 score = 0.87). Our results suggest that, in real-world settings, GPT-3.5 can perform comparably to physicians on the clinical reasoning task of ED acuity determination.
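The abstract's accuracy and F1 figures refer to a binary pairwise task: for each pair of ED visits, did the model pick the same higher-acuity patient as the ground-truth acuity score? As a rough sketch (the labels and data below are illustrative, not from the study), these two metrics can be computed like so:

```python
# Hedged sketch: accuracy and F1 for a pairwise higher-acuity task.
# The encoding (1 = first patient of the pair has higher acuity, 0 = second)
# and the toy data are assumptions for illustration only.

def accuracy(y_true, y_pred):
    """Fraction of pairs where the prediction matches ground truth."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def f1_score(y_true, y_pred, positive=1):
    """Harmonic mean of precision and recall for the positive label."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

# Toy example with 5 pairs; the model misidentifies one pair.
y_true = [1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 0]
print(accuracy(y_true, y_pred))  # 0.8
print(f1_score(y_true, y_pred))  # 0.8
```

Because the task is a two-way forced choice over pairs, accuracy and F1 tend to track each other closely, which matches the reported 0.84 / 0.83 values.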

Topics

Artificial Intelligence in Healthcare and Education · Clinical Reasoning and Diagnostic Skills · Machine Learning in Healthcare