This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Machine learning-based diagnosis for disseminated intravascular coagulation (DIC): Development, external validation, and comparison to scoring systems
Citations: 20
Authors: 11
Year: 2018
Abstract
The major challenge in diagnosing disseminated intravascular coagulation (DIC) is the lack of specific biomarkers, which has led to the development of composite scoring systems. DIC scores are simple and rapidly applicable; however, the optimal fibrin-related markers and their cut-off values remain to be defined, so their use requires optimization. The aim of this study was to optimize the use of DIC-related parameters through a machine learning (ML) approach and to evaluate whether this approach provides diagnostic value in DIC diagnosis. To this end, 46 DIC-related parameters covering both clinical findings and laboratory results were investigated. We retrospectively reviewed 656 DIC-suspected cases at the initial order for a full DIC profile and labeled their evaluation results (set 1: DIC, n = 228; non-DIC, n = 428). Several ML algorithms were tested, and an artificial neural network (ANN) model was established via independent training and testing using 32 selected parameters. This model was externally validated on 217 DIC-suspected cases from a different hospital (set 2: DIC, n = 80; non-DIC, n = 137). The ANN model yielded higher AUC values than the three scoring systems in both set 1 (ANN 0.981; ISTH 0.945; JMHW 0.943; JAAM 0.928) and set 2 (ANN 0.968; ISTH 0.946). Additionally, the relative importance of the 32 parameters was evaluated. Most parameters had contextual importance; however, their importance in the ML approach differed from that in the traditional scoring systems. Our study demonstrates that ML can robustly optimize the use of clinical parameters for DIC diagnosis. We believe this approach could play a supportive role in physicians' medical decisions by being integrated into electronic health record systems. Further prospective validation is required to assess the clinical consequences of the ML approach and its clinical benefit.
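The abstract's central comparison is between AUC values of the ANN model and the traditional scoring systems. As a minimal, self-contained sketch of how such an AUC is computed, the snippet below implements ROC AUC via the Mann-Whitney pairwise formulation; the labels and scores are purely illustrative and are not the study's data.

```python
# Sketch: ROC AUC via the Mann-Whitney pairwise formulation.
# AUC = fraction of (positive, negative) pairs in which the positive
# case receives the higher score, counting ties as half a win.
# Data below are hypothetical, not from the study.

def roc_auc(labels, scores):
    """labels: 1 = DIC, 0 = non-DIC; scores: model outputs."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical labels and scores for four suspected cases:
labels = [0, 0, 1, 1]
ann_scores = [0.1, 0.4, 0.35, 0.8]
print(roc_auc(labels, ann_scores))  # → 0.75
```

In the study itself, the same comparison is made by computing the AUC for the ANN's output and for each scoring system's total score on the same case set; a higher AUC means the model ranks DIC cases above non-DIC cases more often.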
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations