This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
A Comparison of Saliency Methods for Deep Learning Explainability
Citations: 6
Authors: 7
Year: 2021
Abstract
Saliency methods are widely used to visually explain the outputs of "black-box" deep learning models to humans. These methods produce saliency maps that aim to identify the part of an image most responsible for, and therefore best explaining, a Convolutional Neural Network (CNN) decision. In this paper, we consider the case of a classifier and the role of the two main categories of saliency methods: backpropagation and attribution. The first is based on the gradient of the output with respect to the input image, while the second tests how local image perturbations affect the output. We compare the Gradient method, Grad-CAM, Extremal Perturbation, and DEEPCOVER, and highlight the difficulty of determining which method provides the best explanation of a CNN's decision.
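The two categories from the abstract can be contrasted with a minimal, hedged sketch. The toy "model" below (a weighted pixel sum standing in for a CNN logit), its weights, and the image are illustrative assumptions, not anything from the paper: a backpropagation-style map takes the magnitude of the score's gradient at each pixel (approximated here by finite differences), while an attribution-style map measures the score drop when each pixel is occluded.

```python
def model_score(image, weights):
    """Toy classifier score: weighted pixel sum (stand-in for a CNN logit)."""
    return sum(w * p for w, p in zip(weights, image))

def gradient_saliency(image, weights, eps=1e-4):
    """Backpropagation-style saliency: |d score / d pixel|,
    approximated with central finite differences."""
    saliency = []
    for i in range(len(image)):
        plus, minus = list(image), list(image)
        plus[i] += eps
        minus[i] -= eps
        grad = (model_score(plus, weights) - model_score(minus, weights)) / (2 * eps)
        saliency.append(abs(grad))
    return saliency

def occlusion_saliency(image, weights, baseline=0.0):
    """Attribution/perturbation-style saliency: score drop when a
    pixel is occluded, i.e. replaced by a baseline value."""
    base = model_score(image, weights)
    saliency = []
    for i in range(len(image)):
        occluded = list(image)
        occluded[i] = baseline
        saliency.append(base - model_score(occluded, weights))
    return saliency

if __name__ == "__main__":
    weights = [0.1, 0.9, 0.2, 0.05]   # pixel 1 contributes most to the score
    image = [1.0, 1.0, 1.0, 1.0]
    print(gradient_saliency(image, weights))
    print(occlusion_saliency(image, weights))
```

For this linear toy model both maps agree and highlight the same pixel; for a real CNN they generally differ, which is precisely the comparison the paper investigates.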
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,792 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,331 citations
"Why Should I Trust You?"
2016 · 14,605 citations
Generative adversarial networks
2020 · 13,213 citations