This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Are Post-Hoc Explanation Methods for Prostate Lesion Detection Effective for Radiology End Use?
Citations: 1
Authors: 4
Year: 2022
Abstract
Deep learning has demonstrated impressive performance on medical tasks such as cancer classification and lesion detection. However, it is a black-box approach and is therefore difficult to interpret. Interpretability is especially important in high-risk fields such as medicine. Various methods have recently been proposed to interpret deep learning algorithms, but few studies have evaluated these explanation methods in clinical settings such as radiology. To that end, we conduct a pilot study that evaluates the effectiveness of explanation methods for radiology end use. We evaluate whether explanation methods improve diagnostic performance and which method radiologists prefer. We also glean insight into which characteristics radiologists deem explainable. We found that explanation methods increase diagnostic performance, although the effect depends on the individual method. We also find that the radiology cohort deems the themes of insight, visualization, and accuracy to be the most sought-after explainable characteristics. The insights garnered in this study have the potential to guide future development and evaluation of explanation methods for clinical use.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,576 citations
Generative Adversarial Nets
2023 · 19,892 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,300 citations
"Why Should I Trust You?"
2016 · 14,396 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 citations