This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Evaluate Inference Attacks: Attack and Defense against 2D Semantic Segmentation Models
Citations: 1
Authors: 5
Year: 2025
Abstract
Deep learning (DL)-based 2D semantic segmentation (SS) plays a vital role in the perception task of autonomous driving. However, SS models rely on DL, which makes them vulnerable to inference attacks. Recent research has discovered that SS models are susceptible to membership inference attacks, yet other inference attacks remain underexplored. Our study fills this gap by comprehensively investigating the vulnerabilities of two widely used RGB image-based 2D SS models (DeepLabV3 and DeepLabV3+) against three inference attacks: membership inference, attribute inference, and model inversion. We evaluate attack effectiveness on three backbones (MobileNetV2, ResNet50, and ResNet101) across three datasets (VOC2012, CityScapes, and ADE20K), where attack accuracy can reach up to 95% (membership inference), 40% (attribute inference), and 70% (model inversion), revealing that deeper networks are more prone to privacy leakage under inference attacks. Consequently, we introduce differential privacy and model pruning as defensive mechanisms, significantly reducing attack performance: the average accuracy drops by 20% across the three inference attacks. Our findings reveal critical privacy vulnerabilities in SS tasks and offer practical guidance for developing more robust SS models in autonomous driving.
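The membership inference attack discussed in the abstract typically exploits the fact that models are more confident on their training data than on unseen data. The following is a minimal, hedged sketch of a confidence-threshold membership inference attack; the Beta-distributed confidence scores and the threshold value are synthetic illustrations, not the paper's actual data or method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic confidence scores: samples seen during training ("members")
# tend to receive higher model confidence than unseen "non-members".
# These distributions are illustrative assumptions, not the paper's data.
member_conf = rng.beta(8, 2, size=1000)      # skewed toward 1.0
nonmember_conf = rng.beta(5, 3, size=1000)   # lower on average

def membership_attack(confidences, threshold=0.8):
    """Predict 'member' whenever model confidence exceeds a threshold."""
    return confidences > threshold

# Balanced attack accuracy: mean of true-positive and true-negative rates.
tpr = membership_attack(member_conf).mean()
tnr = (~membership_attack(nonmember_conf)).mean()
accuracy = (tpr + tnr) / 2
print(f"attack accuracy: {accuracy:.2f}")
```

An accuracy well above 0.5 indicates measurable privacy leakage; defenses such as differential privacy aim to push the member and non-member confidence distributions together so the attack degrades toward random guessing.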
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,338 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,418 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,303 citations
CBAM: Convolutional Block Attention Module
2018 · 21,301 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,499 citations