This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Machine Learning Versus Usual Care for Diagnostic and Prognostic Prediction in the Emergency Department: A Systematic Review
Citations: 60 · Authors: 5 · Year: 2020
Abstract
OBJECTIVE: Machine learning (ML) has shown promise in other medical fields; we sought to determine whether ML models perform better than usual care in diagnostic and prognostic prediction for emergency department (ED) patients.

METHODS: In this systematic review, we searched MEDLINE, Embase, Central, and CINAHL from inception to October 17, 2019. We included studies comparing diagnostic and prognostic prediction of ED patients by ML models to usual care methods (triage-based scores, clinical prediction tools, clinician judgment) using predictor variables readily available to ED clinicians. We extracted commonly reported performance metrics of model discrimination and classification. We used the PROBAST tool for risk of bias assessment (PROSPERO registration: CRD42020158129).

RESULTS: The search yielded 1,656 unique records, of which 23 studies involving 16,274,647 patients were included. In all seven diagnostic studies, ML models outperformed usual care on all performance metrics. In six studies assessing in-hospital mortality, the best-performing ML models had better discrimination (area under the receiver operating characteristic curve [AUROC] = 0.74–0.94) than any clinical decision tool (AUROC = 0.68–0.81). In four studies assessing hospitalization, ML models had better discrimination (AUROC = 0.80–0.83) than triage-based scores (AUROC = 0.68–0.82). Clinical heterogeneity precluded meta-analysis. Most studies had a high risk of bias due to lack of external validation, low event rates, and insufficient reporting of calibration.

CONCLUSIONS: Our review suggests that ML may have better prediction performance than usual care for ED patients with a variety of clinical presentations and outcomes. However, prediction model reporting guidelines should be followed to provide clinically applicable data. Interventional trials are needed to assess the impact of ML models on patient-centered outcomes.
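The comparisons above rest on the AUROC, which measures discrimination: the probability that a model assigns a higher risk score to a randomly chosen patient who experienced the outcome than to one who did not. As a minimal illustration (not code from the reviewed studies; the labels and scores below are hypothetical), the rank-based definition can be computed directly:

```python
def auroc(labels, scores):
    """AUROC = P(score of a random positive > score of a random negative),
    counting ties as 0.5 (equivalent to the Mann-Whitney U statistic)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for six ED patients (1 = in-hospital mortality)
labels = [0, 0, 1, 0, 1, 1]
scores = [0.10, 0.40, 0.35, 0.20, 0.80, 0.90]
print(round(auroc(labels, scores), 3))  # → 0.889
```

An AUROC of 0.5 corresponds to chance-level discrimination and 1.0 to perfect ranking, which is why the reported gaps (e.g., 0.80–0.83 for ML versus 0.68–0.82 for triage-based scores) indicate better, though overlapping, discrimination. Note that AUROC alone says nothing about calibration, one of the reporting gaps the review highlights.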
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,611 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,504 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,025 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations