OpenAlex · Updated hourly · Last updated: 14.03.2026, 09:03

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

A Comparison of Saliency Methods for Deep Learning Explainability

2021 · 6 citations

Citations: 6
Authors: 7
Year: 2021

Abstract

Saliency methods are widely used to visually explain "black-box" deep learning model outputs to humans. These methods produce saliency maps that aim to identify the parts of an image responsible for, and so best explain, a Convolutional Neural Network (CNN) decision. In this paper, we consider the case of a classifier and the role of the two main categories of saliency methods: backpropagation and attribution. The first is based on the gradient of the output with respect to the input image, while the second tests how local image perturbations affect the output. We compare the Gradient method, Grad-CAM, Extremal Perturbation, and DEEPCOVER, and highlight the difficulty of determining which method provides the best explanation of a CNN's decision.
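The two categories the abstract describes can be illustrated with a minimal sketch. The snippet below is not from the paper: it substitutes a hypothetical linear classifier for a CNN so the gradient is available in closed form, and it uses pixel-zeroing as a stand-in for the perturbation tests the abstract mentions. The names (`grad_saliency`, `pert_saliency`) and the toy shapes are illustrative assumptions only.

```python
import numpy as np

# Hypothetical toy setup: a linear "classifier" stands in for a CNN,
# acting on a flattened 4x4 single-channel "image".
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 16))       # weights: 3 classes, 16 input pixels
x = rng.normal(size=16)            # input image, flattened
scores = W @ x                     # class scores
c = int(np.argmax(scores))         # predicted class

# Backpropagation-style saliency: |d score_c / d x|.
# For a linear model this gradient is simply the weight row W[c].
grad_saliency = np.abs(W[c])

# Perturbation-style saliency: the score drop when each pixel is
# individually zeroed out (a crude local perturbation test).
pert_saliency = np.array([
    scores[c] - (W @ np.where(np.arange(16) == i, 0.0, x))[c]
    for i in range(16)
])

# Both maps can be reshaped back to the 4x4 image grid for display.
print(grad_saliency.reshape(4, 4))
print(pert_saliency.reshape(4, 4))
```

Note that for this linear toy model the perturbation score drop for pixel `i` is exactly `W[c, i] * x[i]`, so the two families agree up to the sign and scaling of the input; for a real CNN the nonlinearity is precisely what makes the two categories diverge and their comparison nontrivial.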
