This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Explaining Therapy Predictions with Layer-Wise Relevance Propagation in Neural Networks
Citations: 83
Authors: 4
Year: 2018
Abstract
In typical data-analysis projects in biology and healthcare, simpler predictive models such as regressions and decision trees enjoy more popularity than more complex and expressive ones such as neural networks. One reason is that the functioning of simpler models is easier to explain, which greatly increases user acceptance. A neural network, by contrast, is often regarded as a black-box model, because the very strength that lets it model complex interactions also makes its operation almost impossible to explain. Still, neural networks remain very interesting tools, since they have demonstrated promising performance in a variety of predictive tasks, such as medical image classification and segmentation, as well as clinical event prediction, e.g., the modeling of therapy decisions and survival time. In this work, we attempt to improve the explainability of neural networks applied in healthcare. We propose applying the Layer-wise Relevance Propagation algorithm to explain clinical decisions proposed by modern deep neural networks. The algorithm highlights the features that lead to the probabilistic prediction of a therapy decision for each individual patient. We evaluate the feature-oriented explanations generated by the algorithm with clinical experts and show that the features the algorithm identifies as relevant largely agree with clinical knowledge and guidelines. We believe that being able to explain machine-learning-based decisions greatly improves the transparency and acceptance of neural network models applied in the clinical domain.
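To illustrate the general idea behind Layer-wise Relevance Propagation, here is a minimal sketch of the commonly used epsilon rule for a small fully connected ReLU network. This is not the authors' implementation: the `lrp_epsilon` helper, the network shape, and the choice to seed the backward pass with the raw output activation are illustrative assumptions.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def lrp_epsilon(weights, biases, x, eps=1e-6):
    """Sketch of LRP with the epsilon rule (hypothetical helper).

    `weights` and `biases` are per-layer lists for a ReLU network.
    Returns one relevance score per input feature, redistributing the
    output activation backwards layer by layer.
    """
    # Forward pass, storing the activation entering each layer.
    activations = [x]
    a = x
    for W, b in zip(weights, biases):
        a = relu(a @ W + b)
        activations.append(a)

    # Seed the backward pass with the output activations as relevance.
    R = activations[-1]

    # Backward pass: epsilon rule  R_j = a_j * sum_k w_jk * R_k / (z_k + eps)
    for W, b, a in zip(reversed(weights), reversed(biases),
                       reversed(activations[:-1])):
        z = a @ W + b                              # pre-activations z_k
        z = z + eps * np.where(z >= 0, 1.0, -1.0)  # stabiliser, avoids /0
        s = R / z                                  # relevance per unit of z
        R = a * (s @ W.T)                          # redistribute to inputs
    return R
```

With zero biases, the rule approximately conserves relevance: the per-feature scores sum to the network's total output activation, which is what makes the resulting heatmap over input features interpretable as a decomposition of the prediction.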
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations