This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI
Citations: 8
Authors: 5
Year: 2019
Abstract
The American College of Radiology (ACR) has guidelines on the appropriate ordering of Magnetic Resonance Imaging (MRI) brain scans. MRI requests are currently reviewed manually by radiologists to ensure compliance with these guidelines. In this paper, we implemented a stacked recurrent neural network (RNN) utilizing a bidirectional long short-term memory (Bi-LSTM) sequence with BioWordVec, a biomedical word embedding vector that uses word representations from a lexicon developed from medical publications, to build an automated classification system for request audit. To overcome the interpretation problems of black-box models, the RNN is integrated with the model-agnostic explainer LIME (Local Interpretable Model-Agnostic Explanations) to provide explainable support for clinicians in the healthcare environment. The performance of the RNN is compared with a Random Forest (RF) algorithm that utilizes the bag-of-words concept. The RNN was trained and validated on 2470 free-text patient orders and tested on a separate 2711 orders, producing an accuracy of 82.51% and an area under the ROC curve of 0.89, comparable to or surpassing the RF in both performance and usability. The use of deep learning with explainable LIME in this study provides a good use case for an augmented decision-making framework in healthcare.
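The explainability step the abstract describes — LIME perturbs an input, queries the black box, and fits a simple local surrogate around it — can be sketched in toy form. Everything below is an illustrative assumption: the keyword-based scorer stands in for the paper's Bi-LSTM, and the per-word weighted marginal effect is a simplified surrogate for LIME's weighted linear model, not the `lime` library itself.

```python
import random

# Toy stand-in for the paper's black-box classifier (hypothetical: the real
# model is a Bi-LSTM over BioWordVec embeddings). The probability that an
# MRI request is guideline-compliant rises with certain keywords.
KEYWORD_WEIGHTS = {"seizure": 0.4, "trauma": 0.3, "acute": 0.2, "headache": 0.1}

def black_box(words):
    return min(0.1 + sum(KEYWORD_WEIGHTS.get(w, 0.0) for w in words), 1.0)

def explain(text, n_samples=2000, seed=0):
    """LIME-style local explanation: perturb the input by randomly dropping
    words, query the black box on each perturbation, and score each word by
    the locality-weighted difference in predictions between samples that
    keep it and samples that drop it."""
    rng = random.Random(seed)
    words = text.split()
    samples = []
    for _ in range(n_samples):
        mask = [rng.random() < 0.5 for _ in words]
        kept = [w for w, m in zip(words, mask) if m]
        # Locality kernel: favour samples close to the original request.
        weight = sum(mask) / len(words)
        samples.append((mask, weight, black_box(kept)))

    def weighted_mean(subset):
        total = sum(wt for _, wt, _ in subset)
        return sum(wt * p for _, wt, p in subset) / total

    importance = {}
    for i, word in enumerate(words):
        present = [s for s in samples if s[0][i]]
        absent = [s for s in samples if not s[0][i]]
        importance[word] = weighted_mean(present) - weighted_mean(absent)
    return importance

imp = explain("acute trauma with headache")
print(sorted(imp.items(), key=lambda kv: -kv[1]))
```

Because the toy scorer is additive, each keyword's estimated importance recovers roughly its weight, while a filler word such as "with" scores near zero — the kind of per-token evidence LIME surfaces to clinicians auditing a request.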
Similar Works
"Why Should I Trust You?"
2016 · 14,156 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,543 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Analysis of Survival Data.
1985 · 4,379 citations