This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Black-box Adversarial Attack against Visual Interpreters for Deep Neural Networks
Citations: 1
Authors: 2
Year: 2023
Abstract
With the rapid development of deep neural networks (DNNs), eXplainable AI, which explains the basis for a model's predictions, has become increasingly important. At the same time, DNNs have a known vulnerability: Adversarial Examples (AEs), in which specially crafted perturbations applied to an input cause incorrect outputs. Similar vulnerabilities may also exist in image interpreters such as GradCAM and must be investigated, since in medical imaging they could lead to misdiagnosis. This study therefore proposes a black-box adversarial attack that misleads the image interpreter using Sep-CMA-ES. The proposed method deceptively shifts the interpreter's focus area away from that of the original image while keeping the predicted label unchanged.
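The attack described above can be pictured as a black-box optimization loop: search for a perturbation that moves the interpreter's focus while a penalty rejects any candidate that flips the predicted label. The sketch below is a hypothetical illustration only. The linear "model", the single-pixel focus heuristic standing in for GradCAM, and the simplified separable evolution strategy (diagonal covariance, in the spirit of Sep-CMA-ES) are all assumptions, not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-ins for the black-box components (hypothetical; the paper
# attacks real DNNs interpreted with GradCAM, which we do not have here).
W = rng.normal(size=(2, 16))            # toy 2-class linear "model" on 16-dim inputs

def predict(x):
    return int(np.argmax(W @ x))

def focus(x):
    # Crude "interpreter": index of the pixel contributing most to the
    # predicted class (a stand-in for a GradCAM focus location).
    c = predict(x)
    return int(np.argmax(W[c] * x))

x0 = rng.normal(size=16)
label0, focus0 = predict(x0), focus(x0)

def fitness(delta):
    # Lower is better: reward moving the interpreter's focus away from
    # the original pixel while keeping the predicted label unchanged.
    x = x0 + delta
    if predict(x) != label0:
        return 1e6                      # label flipped: reject candidate
    return -abs(focus(x) - focus0)

# Simplified separable evolution strategy: diagonal covariance only,
# weighted recombination of the best half of each population.
mean, sigma = np.zeros(16), 0.3
var = np.ones(16)                       # diagonal covariance estimate
lam, mu = 12, 6                         # population size, number of elites
w = np.log(mu + 0.5) - np.log(np.arange(1, mu + 1))
w /= w.sum()                            # recombination weights

best, best_f = np.zeros(16), fitness(np.zeros(16))
for _ in range(60):
    z = rng.normal(size=(lam, 16)) * np.sqrt(var)
    pop = mean + sigma * z
    fs = np.array([fitness(d) for d in pop])
    order = np.argsort(fs)
    if fs[order[0]] < best_f:           # keep the best candidate seen
        best, best_f = pop[order[0]].copy(), fs[order[0]]
    elite = z[order[:mu]]
    mean = mean + sigma * (w @ elite)   # weighted recombination
    var = 0.9 * var + 0.1 * (w @ elite**2)  # diagonal covariance update

adv = x0 + best                         # adversarial input candidate
```

By construction, `best` is never worse than the zero perturbation; on a real model the fitness would query the deployed network and interpreter, which is what makes the attack black-box.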
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,532 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,712 citations
CBAM: Convolutional Block Attention Module
2018 · 21,612 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,410 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,605 citations