This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Attacking Important Pixels for Anchor-free Detectors
Citations: 0
Authors: 7
Year: 2023
Abstract
Deep neural networks have been demonstrated to be vulnerable to adversarial attacks: subtle perturbations can completely change the prediction result. Existing adversarial attacks on object detection focus on attacking anchor-based detectors, which may not work well for anchor-free detectors. In this paper, we propose the first adversarial attack dedicated to anchor-free detectors. It is a category-wise attack that attacks important pixels of all instances of a category simultaneously. Our attack manifests in two forms, sparse category-wise attack (SCA) and dense category-wise attack (DCA), which minimize the $L_0$ and $L_\infty$ norm-based perturbations, respectively. For DCA, we present three variants, DCA-G, DCA-L, and DCA-S, which select a global region, a local region, and a semantic region, respectively, to attack. Our experiments on large-scale benchmark datasets including PascalVOC, MS-COCO, and MS-COCO Keypoints indicate that our proposed methods achieve state-of-the-art attack performance and transferability on both object detection and human pose estimation tasks.
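For illustration only, the following is a minimal sketch of an $L_\infty$-bounded iterative perturbation applied to a category-wise detection loss, in the spirit of the dense category-wise attack summarized above. It is not the authors' implementation: `detector_loss`, assumed here to return a scalar confidence loss summed over all detected instances of one target category, and all hyperparameter values are hypothetical placeholders.

```python
import torch


def linf_category_attack(image, detector_loss, target_category,
                         eps=8 / 255, step=2 / 255, iters=10):
    """Iteratively perturb `image` to suppress detections of `target_category`,
    keeping the perturbation inside an L_inf ball of radius `eps`.

    `detector_loss(adv_image, category)` is a hypothetical callable returning a
    scalar confidence loss summed over all instances of `category`.
    """
    adv = image.clone().detach()
    for _ in range(iters):
        adv.requires_grad_(True)
        loss = detector_loss(adv, target_category)  # scalar tensor
        loss.backward()
        with torch.no_grad():
            # Signed gradient step to reduce the category confidence
            # (flip the sign if the loss is defined the other way around).
            adv = adv - step * adv.grad.sign()
            # Project back into the L_inf ball around the clean image
            # and clamp to the valid pixel range [0, 1].
            adv = image + torch.clamp(adv - image, -eps, eps)
            adv = torch.clamp(adv, 0.0, 1.0)
        adv = adv.detach()
    return adv
```

In such a scheme, each step follows the sign of the gradient, and the projection after every step keeps the accumulated perturbation within the $L_\infty$ budget; a sparse ($L_0$) variant would instead restrict updates to a small set of selected pixels.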
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,378 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,475 citations
CBAM: Convolutional Block Attention Module
2018 · 21,373 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,322 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,514 citations