This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
COVID-Net Clinical ICU: Enhanced Prediction of ICU Admission for COVID-19 Patients via Explainability and Trust Quantification
2
Citations
4
Authors
2021
Year
Abstract
The COVID-19 pandemic continues to have a devastating global impact, and has placed a tremendous burden on struggling healthcare systems around the world. Given the limited resources, accurate patient triaging and care planning is critical in the fight against COVID-19, and one crucial task within care planning is determining if a patient should be admitted to a hospital's intensive care unit (ICU). Motivated by the need for transparent and trustworthy ICU admission clinical decision support, we introduce COVID-Net Clinical ICU, a neural network for ICU admission prediction based on patient clinical data. Driven by a transparent, trust-centric methodology, the proposed COVID-Net Clinical ICU was built using a clinical dataset from Hospital Sirio-Libanes comprising 1,925 COVID-19 patient records, and is able to predict when a COVID-19 positive patient would require ICU admission with an accuracy of 96.9% to facilitate better care planning for hospitals amidst the ongoing pandemic. We conducted system-level insight discovery using a quantitative explainability strategy to study the decision-making impact of different clinical features and gain actionable insights for enhancing predictive performance. We further leveraged a suite of trust quantification metrics to gain deeper insights into the trustworthiness of COVID-Net Clinical ICU. By digging deeper into when and why clinical predictive models make certain decisions, we can uncover key factors in decision making for critical clinical decision support tasks such as ICU admission prediction and identify the situations under which clinical predictive models can be trusted for greater accountability.
Related Works
"Why Should I Trust You?"
2016 · 14,384 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,719 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,434 citations