This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Deepmarking: Leveraging Adversarial Noise for Membership Inference Attacks
Citations: 0 · Authors: 2 · Year: 2024
Abstract
The performance and inference capabilities of neural networks rely heavily on the training data they are exposed to. Generally, larger datasets yield more powerful models. This incentive to continuously extend the training sets of models can lead to data exploitation, where data is used against the owner's wishes to train neural networks. Even if such misuse of data is suspected, it is currently next to impossible to verify. This research explores the use of adversarial noise to manipulate the performance of neural networks, and investigates how those findings can be used to infer whether a collection of data is a member of a training set. It proposes a novel approach to generate "deepmarked" images containing adversarial noise that maximizes their detectability as training set members while remaining visually indistinguishable from the original data. The findings of this study demonstrate the feasibility of detecting and inferring the membership status of a data collection within a neural network's training set using the proposed technique, in a restricted black-box setting where the model output contains only the single highest-likelihood class.
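The paper's actual deepmarking procedure is not reproduced on this page, but its key ingredient can be sketched: an adversarial perturbation with a small per-pixel (L-infinity) bound that nonetheless changes the model's top-1 prediction, the only signal available in the restricted black-box setting described above. The toy linear classifier, dimensions, and perturbation budget below are illustrative assumptions, not the authors' construction.

```python
import numpy as np

# Hypothetical toy sketch -- NOT the paper's deepmarking algorithm. It only
# illustrates the core mechanism: an L-infinity-bounded perturbation that is
# tiny per pixel yet flips the classifier's top-1 class, the single output
# observable in the paper's restricted black-box setting.

rng = np.random.default_rng(0)

# Toy binary linear classifier: logits = W @ x on an 8-dimensional "image".
W = rng.normal(size=(2, 8))
x = rng.normal(size=8)

def top1(v):
    """Index of the highest-likelihood class (the only black-box output)."""
    return int(np.argmax(W @ v))

orig = top1(x)
other = 1 - orig

# For a linear model, moving x by eps * sign(W[other] - W[orig]) raises the
# losing class's logit margin by exactly eps * ||W[other] - W[orig]||_1,
# so the smallest budget that provably flips the top-1 class is computable.
diff = W[other] - W[orig]
margin = (W @ x)[orig] - (W @ x)[other]       # how far "other" trails
eps = margin / np.abs(diff).sum() + 1e-3      # just over the flip threshold

x_marked = x + eps * np.sign(diff)            # the "mark": bounded noise

print("per-pixel change (L-inf):", np.max(np.abs(x_marked - x)))
print("top-1 before:", orig, "after:", top1(x_marked))
```

In the paper's setting the perturbation is additionally optimized to stay visually indistinguishable from the original image while making a model trained on the marked data identifiable from its top-1 responses alone; the closed-form budget above works only for this linear toy.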
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,467 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,603 citations
CBAM: Convolutional Block Attention Module
2018 · 21,504 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,368 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,577 citations