This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Membership Inference Attack Using Self Influence Functions
Citations: 3
Authors: 2
Year: 2022
Abstract
Membership inference (MI) attacks aim to determine whether a specific data sample was used to train a machine learning model. MI is therefore a major privacy threat to models trained on private, sensitive data such as medical records. MI attacks may be considered in the black-box setting, where the model's parameters and activations are hidden from the adversary, or in the white-box setting, where they are available to the attacker. In this work, we focus on the latter and present a novel MI attack that employs influence functions, or more specifically the samples' self-influence scores, to perform the MI prediction. We evaluate our attack on the CIFAR-10, CIFAR-100, and Tiny ImageNet datasets, using versatile architectures such as AlexNet, ResNet, and DenseNet. Our attack method achieves new state-of-the-art results for training both with and without data augmentations. Code is available at https://github.com/giladcohen/sif_mi_attack.
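The self-influence idea underlying the attack can be illustrated with a toy model. This is a hedged sketch, not the paper's implementation: for a logistic-regression model, a sample's self-influence is its loss gradient contracted through the inverse Hessian of the training loss, g(z)ᵀ H⁻¹ g(z). High self-influence suggests the model adapted strongly to that sample, hinting at training-set membership. The median-based threshold below is a placeholder assumption, not the paper's calibration method.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 200, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.1 * rng.normal(size=n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Fit logistic regression with plain gradient descent on the mean log-loss.
w = np.zeros(d)
for _ in range(2000):
    p = sigmoid(X @ w)
    w -= 0.1 * (X.T @ (p - y)) / n

# Hessian of the mean log-loss at the fitted weights (damped for stability).
p = sigmoid(X @ w)
H = (X.T * (p * (1 - p))) @ X / n + 1e-3 * np.eye(d)
H_inv = np.linalg.inv(H)

def self_influence(x, label):
    """Self-influence score: grad L(z)^T H^{-1} grad L(z) for one sample."""
    g = (sigmoid(x @ w) - label) * x  # gradient of the log-loss at (x, label)
    return float(g @ H_inv @ g)

# Membership prediction: score above a threshold -> predicted "member".
# (Threshold choice here is illustrative only.)
scores = np.array([self_influence(X[i], y[i]) for i in range(n)])
threshold = np.median(scores)
is_member = scores > threshold
```

Because the damped Hessian is positive definite, every score is non-negative; the attacker would calibrate the threshold (e.g. on shadow models) rather than use the median as done here.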
Similar Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,378 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,475 citations
CBAM: Convolutional Block Attention Module
2018 · 21,373 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,322 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,514 citations