This is an overview page with metadata for this scientific article. The full article is available from the publisher.
A clinical laboratorian’s journey in developing a machine learning algorithm to assist in testing utilization and stewardship
Citations: 0
Authors: 8
Year: 2023
Abstract
Background: Thrombotic thrombocytopenic purpura (TTP) is a rare thrombotic microangiopathy (TMA) and a medical emergency. The ADAMTS13 (AS13) activity assay is needed to confirm the diagnosis. Within the context of limited laboratory resources, a machine learning (ML) model was developed to potentially reduce overutilization of AS13 testing and aid clinical colleagues in the future.
Methods: A hybrid approach, drawing on both in-house and literature data, was taken to acquire the data used to train and test the decision tree (DT) ML model. The dataset consisted of 104 patients (30 in-house, 74 literature-derived) with an equal mix of TTP and non-TTP patients (52 each). The features used to develop the supervised DT model were taken directly from the PLASMIC score.
Results: The optimized DT model's overall accuracy on the testing dataset was 81%. The sensitivity, specificity, and positive and negative predictive values were 100%, 69%, 67%, and 100%, respectively.
Conclusions: We were able to improve the overall performance of the DT model while maintaining a high NPV. Although this invariably translated into potentially more false-positive results (cases classified as TTP that are actually non-TTP), our overall goal was not to restrict testing for any potentially true TTP cases. This study was done with limited clinical laboratory resources and remains a work in progress. However, as laboratory specialists become more involved in artificial intelligence (AI)/ML initiatives, institutions will need to provide them with a modern information technology (IT) infrastructure and adequate resources to enable these efforts to meet the needs of the future health care system.
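Since the abstract states that the DT model's features were taken directly from the PLASMIC score, the underlying feature set can be illustrated with a short sketch. The thresholds below follow the published PLASMIC score; the function and variable names are hypothetical, and the study's actual feature encoding may differ.

```python
# Hedged sketch: the seven PLASMIC score criteria, which the paper says
# served as features for the supervised decision tree model.
# Thresholds follow the published PLASMIC score; names are illustrative.

def plasmic_score(platelets_e9_per_l, hemolysis, no_active_cancer,
                  no_transplant_history, mcv_fl, inr, creatinine_mg_dl):
    """Return the PLASMIC score (0-7); one point per criterion met."""
    criteria = [
        platelets_e9_per_l < 30,   # platelet count < 30 x 10^9/L
        hemolysis,                 # reticulocytes >2.5%, undetectable haptoglobin,
                                   # or indirect bilirubin >2.0 mg/dL
        no_active_cancer,          # no active cancer in the prior year
        no_transplant_history,     # no solid-organ or stem-cell transplant
        mcv_fl < 90,               # mean corpuscular volume < 90 fL
        inr < 1.5,                 # INR < 1.5
        creatinine_mg_dl < 2.0,    # creatinine < 2.0 mg/dL
    ]
    return sum(criteria)

def risk_category(score):
    """Map a PLASMIC score to its published risk band."""
    if score >= 6:
        return "high"
    if score == 5:
        return "intermediate"
    return "low"

# Hypothetical patient meeting all seven criteria:
s = plasmic_score(12, True, True, True, 84.0, 1.1, 0.9)
print(s, risk_category(s))  # -> 7 high
```

In a DT workflow, each boolean criterion (rather than the summed score) would typically be passed as a separate binary feature, letting the tree learn its own splits instead of relying on the fixed 5/6-point risk bands.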
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations