This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Rethinking Robustness of Model Attributions
Citations: 0
Authors: 4
Year: 2023
Abstract
For machine learning models to be reliable and trustworthy, their decisions must be interpretable. As these models find increasing use in safety-critical applications, it is important that not just the model predictions but also their explanations (as feature attributions) be robust to small, human-imperceptible input perturbations. Recent works have shown that many attribution methods are fragile and have proposed improvements to either these methods or the model training. We observe two main causes of fragile attributions: first, the existing metrics of robustness (e.g., top-k intersection) over-penalize even reasonable local shifts in attribution, making random perturbations appear to be a strong attack; second, the attribution can be concentrated in a small region even when there are multiple important parts in an image. To rectify this, we propose simple ways to strengthen existing metrics and attribution methods: incorporating the locality of pixels into robustness metrics and the diversity of pixel locations into attributions. Regarding the role of model training in attributional robustness, we empirically observe that adversarially trained models have more robust attributions on smaller datasets; however, this advantage disappears on larger datasets. Code is available at https://github.com/ksandeshk/LENS.
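To make the over-penalization argument concrete, here is a minimal sketch in Python of the standard top-k intersection metric alongside a locality-relaxed variant that credits small spatial shifts. The function names and the `radius` window parameter are illustrative assumptions for this sketch, not the paper's LENS implementation.

```python
import numpy as np

def topk_intersection(attr_a, attr_b, k):
    """Standard top-k intersection: fraction of the k highest-attributed
    pixels shared between two attribution maps (exact index match)."""
    top_a = set(np.argsort(attr_a.ravel())[-k:])
    top_b = set(np.argsort(attr_b.ravel())[-k:])
    return len(top_a & top_b) / k

def locality_aware_topk(attr_a, attr_b, k, radius=2):
    """Relaxed variant (illustrative): a top-k pixel of attr_a counts as
    matched if any top-k pixel of attr_b lies within `radius` pixels
    (Chebyshev distance), so small local shifts are not penalized."""
    h, w = attr_a.shape
    top_a = np.argsort(attr_a.ravel())[-k:]
    top_b_mask = np.zeros((h, w), dtype=bool)
    top_b_mask[np.unravel_index(np.argsort(attr_b.ravel())[-k:], (h, w))] = True
    matched = 0
    for idx in top_a:
        r, c = divmod(int(idx), w)
        r0, r1 = max(0, r - radius), min(h, r + radius + 1)
        c0, c1 = max(0, c - radius), min(w, c + radius + 1)
        if top_b_mask[r0:r1, c0:c1].any():
            matched += 1
    return matched / k

# A one-pixel shift tanks the exact metric but not the locality-aware one.
rng = np.random.default_rng(0)
a = rng.random((8, 8))
b = np.roll(a, 1, axis=1)  # shift attributions by one column
print(topk_intersection(a, b, k=8), locality_aware_topk(a, b, k=8))
```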
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,699 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,991 citations
CBAM: Convolutional Block Attention Module
2018 · 21,814 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,500 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,707 citations