This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Reconstruction-Based Membership Inference Attacks are Easier on Difficult Problems.
Citations: 1
Authors: 3
Year: 2021
Abstract
Membership inference attacks (MIA) try to detect whether data samples were used to train a neural network model, e.g., to detect copyright abuses. We show that models with higher-dimensional input and output are more vulnerable to MIA, and address in more detail models for image translation and semantic segmentation, including medical image segmentation. We show that reconstruction errors can lead to very effective MIA attacks, as they are indicative of memorization. Unfortunately, reconstruction error alone is less effective at discriminating between hard-to-predict images used in training and easy-to-predict images that were never seen before. To overcome this, we propose a novel predictability error that can be computed per sample without requiring a training set. Our membership error, obtained by subtracting the predictability error from the reconstruction error, is shown to achieve high MIA accuracy on a wide range of benchmarks.
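The abstract's core quantity can be sketched in a few lines. Below is a minimal illustration (not the paper's implementation): per-sample reconstruction error as mean squared error, and the membership error as reconstruction error minus a predictability error. The paper's actual definition of the predictability error is not given in the abstract, so here it is passed in as a precomputed per-sample array.

```python
import numpy as np

def reconstruction_error(pred, target):
    # Per-sample mean squared error, averaged over all non-batch axes
    # (e.g., pixels and channels for image translation/segmentation outputs).
    return np.mean((pred - target) ** 2, axis=tuple(range(1, pred.ndim)))

def membership_error(pred, target, predictability):
    # Membership error = reconstruction error - predictability error.
    # A sample that reconstructs much better than its difficulty predicts
    # (strongly negative score) is a candidate training-set member.
    # `predictability` is assumed to be a per-sample array computed
    # beforehand; its definition follows the paper, not this sketch.
    return reconstruction_error(pred, target) - predictability
```

In practice one would threshold the membership error (lower score ⇒ more likely a training member), which is the intuition behind subtracting out sample difficulty.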
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,542 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,727 citations
CBAM: Convolutional Block Attention Module
2018 · 21,626 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,419 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,609 citations