OpenAlex · Updated hourly · Last updated: 07.04.2026, 11:47

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Deepmarking: Leveraging Adversarial Noise for Membership Inference Attacks

2024 · 0 citations
Open full text at the publisher

0

Citations

2

Authors

2024

Year

Abstract

The performance and inference capabilities of neural networks rely heavily on the training data they are exposed to. Generally, larger datasets yield more powerful models. This incentive to continuously extend the training sets of models can lead to data exploitation, where data is used against the owner's wishes to train neural networks. Even if such misuse of data is suspected, it is currently next to impossible to verify. This research explores the use of adversarial noise to manipulate the performance of neural networks, investigating how those findings can be used to infer whether a collection of data is a member of a training set. It proposes a novel approach to generate “deepmarked” images containing adversarial noise that maximizes their detectability as training set members, while remaining visually indistinguishable from the original data. The findings of this study demonstrate the feasibility of detecting and inferring the membership status of a data collection within a neural network's training set using the proposed technique, in a restricted black-box setting where the model output contains only the single highest-likelihood class.
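The abstract describes two ingredients: embedding a bounded adversarial "deepmark" into images, and then inferring membership from only the top-1 class of a black-box model. The paper's exact construction is not given here, so the sketch below is a hedged stand-in: an FGSM-style targeted perturbation on a toy linear softmax model (the functions `deepmark` and `infer_membership`, the `eps` bound, the `margin` threshold, and the linear model are all illustrative assumptions, not the authors' method).

```python
import numpy as np

def softmax(z):
    z = z - z.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def deepmark(x, W, b, target, eps=0.03):
    """Illustrative FGSM-style mark: push a linear softmax model
    toward `target` while keeping the noise inside an L_inf ball of
    radius eps, so the marked image stays visually indistinguishable."""
    p = softmax(x @ W + b)                 # (n, classes) predicted probs
    onehot = np.eye(W.shape[1])[target]    # one-hot target class
    grad = (p - onehot) @ W.T              # d(cross-entropy)/d(input)
    # step against the gradient (toward the target), clip to valid pixels
    return np.clip(x - eps * np.sign(grad), 0.0, 1.0)

def infer_membership(top1_preds, target, base_rate, margin=0.2):
    """Restricted black-box test: only the argmax class is observed.
    Flag membership when marked probes hit `target` markedly more
    often than the base rate of a model never trained on them."""
    hit_rate = np.mean(np.asarray(top1_preds) == target)
    return bool(hit_rate > base_rate + margin)

# Toy data: 8 flattened "images" with 16 features, 4 classes.
rng = np.random.default_rng(0)
x = rng.random((8, 16))
W, b = rng.standard_normal((16, 4)), np.zeros(4)
marked = deepmark(x, W, b, target=2, eps=0.03)
```

The membership decision needs only argmax labels, matching the restricted black-box setting from the abstract: a suspect model is queried on the marked probes, and an elevated target-class hit rate relative to the expected base rate is taken as evidence of training-set membership.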

Similar works

Authors

Institutions

Topics

Adversarial Robustness in Machine Learning · Privacy-Preserving Technologies in Data · Artificial Intelligence in Healthcare and Education