This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Clinicians vs. Artificial Intelligence in Patient Outcome Prediction in the Intensive Care Unit
Citations: 0
Authors: 5
Year: 2026
Abstract
IMPORTANCE: Accurate prediction of patient outcomes in the intensive care unit (ICU) is critical for clinical decision-making. While artificial intelligence (AI) has shown potential in retrospective prediction, direct comparisons with human clinicians, particularly in prospective real-world settings, remain scarce. OBJECTIVE: To compare the predictive performance of human clinicians and AI in predicting ICU patient outcomes, both retrospectively and prospectively. DESIGN: A mixed retrospective and prospective study comparing clinician and AI performance in ICU patient outcome prediction. SETTING: Fifteen adult ICUs in Alberta, Canada. PARTICIPANTS: The retrospective analysis included 990 ICU admissions randomly selected between February 2012 and December 2019, whose patient outcomes were collectively predicted by 7 clinicians. The prospective analysis involved 238 ICU admissions from 215 adult patients between September 2020 and December 2022, with a total of 75 clinicians each making at least one prediction. EXPOSURES: Retrospective clinician predictions were made from patient data; prospective clinician predictions were collected during active patient care. AI models were trained on retrospective data from 46,631 ICU admissions of 41,096 unique patients to predict the outcomes of the same patients the clinicians predicted in the retrospective and prospective settings. MAIN OUTCOMES AND MEASURES: Primary outcomes were in-hospital mortality, 30-day post-discharge mortality, ICU length of stay (LOS), and hospital LOS. Secondary outcomes included the occurrence of delirium and acute kidney injury during the ICU stay. Classification and regression performance metrics of AI and clinicians were compared using Wilcoxon rank-sum tests. Inter-rater agreement amongst clinicians and between clinicians and AI was analyzed with Cohen's kappa and the intraclass correlation coefficient.
RESULTS: In the retrospective setting, AI generally outperformed individual clinicians, but the aggregated predictions of seven clinicians outperformed AI. In the prospective setting, subspecialized physicians generally outperformed AI, whereas physicians in training and nurses generally underperformed it. Both clinicians and AI performed poorly in LOS prediction. Inter-rater agreement was poor to fair, both amongst clinicians and between clinicians and AI. CONCLUSIONS AND RELEVANCE: This study provides a comprehensive evaluation of clinician and AI performance in ICU outcome prediction under both retrospective and real-world prospective conditions, setting important prediction performance benchmarks.
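The inter-rater agreement analysis described above can be illustrated with a minimal sketch of Cohen's kappa for two raters on binary outcome labels. The function is a standard textbook implementation, not code from the study, and the clinician/AI predictions below are invented for illustration only:

```python
# Minimal sketch of Cohen's kappa between two raters, e.g. a clinician
# and an AI model making binary in-hospital mortality predictions.
# The example data are illustrative, NOT data from the study.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa = (p_o - p_e) / (1 - p_e), where p_o is observed
    agreement and p_e is chance agreement from the marginal label rates."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: fraction of cases where the raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: product of each rater's marginal frequency per label.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((freq_a[lab] / n) * (freq_b[lab] / n)
              for lab in set(freq_a) | set(freq_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical predictions (1 = predicted in-hospital death).
clinician = [0, 0, 1, 1, 0, 1, 0, 0, 1, 0]
ai_model  = [0, 0, 1, 0, 0, 1, 1, 0, 1, 0]
print(round(cohens_kappa(clinician, ai_model), 3))  # → 0.583
```

By common rules of thumb (e.g. Landis and Koch), kappa below 0.20 is "poor/slight" and 0.21 to 0.40 "fair", which is the range the study reports for agreement both amongst clinicians and between clinicians and AI.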
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations