This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Exploring Robust Misclassifications of Neural Networks to Enhance Adversarial Attacks
Citations: 2
Authors: 5
Year: 2021
Abstract
Progress in making neural networks more robust against adversarial attacks is mostly marginal, despite the great efforts of the research community. Moreover, robustness evaluation is often imprecise, making it difficult to identify promising approaches. We analyze the classification decisions of 19 different state-of-the-art neural networks trained to be robust against adversarial attacks. Our findings suggest that current untargeted adversarial attacks induce misclassification towards only a limited number of different classes. Additionally, we observe that both over- and under-confidence in model predictions result in an inaccurate assessment of model robustness. Based on these observations, we propose a novel loss function for adversarial attacks that consistently improves the attack success rate compared to prior loss functions for 19 out of 19 analyzed models.
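The abstract refers to untargeted adversarial attacks and the loss functions that drive them. As background only, the sketch below shows a generic single-step untargeted attack (FGSM-style, maximizing cross-entropy of the true class) on a hypothetical linear softmax classifier; the paper's proposed loss is defined only in the full article and is not reproduced here.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_untargeted(x, y_true, W, b, eps):
    """One FGSM step: perturb x in the direction that increases
    the cross-entropy loss of the true label y_true."""
    p = softmax(W @ x + b)
    # Gradient of cross-entropy w.r.t. the logits is (p - one_hot(y_true));
    # the chain rule through the linear layer gives W^T (p - one_hot).
    grad_logits = p.copy()
    grad_logits[y_true] -= 1.0
    grad_x = W.T @ grad_logits
    return x + eps * np.sign(grad_x)

# Toy setup (all values hypothetical, for illustration only).
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 4))
b = np.zeros(3)
x = rng.normal(size=4)
y = int(np.argmax(softmax(W @ x + b)))  # model's clean prediction

x_adv = fgsm_untargeted(x, y, W, b, eps=0.5)
p_clean = softmax(W @ x + b)[y]
p_adv = softmax(W @ x_adv + b)[y]
print(p_adv < p_clean)  # the true-class confidence drops after the step
```

Because cross-entropy is convex in the input for a linear model, stepping along the sign of the gradient strictly increases the loss, so the true-class confidence drops; for deep networks this holds only approximately, which is one reason loss design matters for attack success.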
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,694 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,984 citations
CBAM: Convolutional Block Attention Module
2018 · 21,802 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,499 citations
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18,702 citations