OpenAlex · Updated hourly · Last updated: 12.03.2026, 07:31

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Poster: Membership Inference Attacks via Contrastive Learning

2023 · 3 citations · 4 authors
Open full text at publisher

Abstract

Since a machine learning model is often trained on a limited data set, it sees the same data samples multiple times, which causes it to memorize much of the training set. Membership Inference Attacks (MIAs) exploit this behavior to determine whether a given data sample was used to train a machine learning model. However, in realistic scenarios it is difficult for the adversary to obtain enough samples with accurately labeled membership status, especially since most samples are non-members in real-world applications. To address this limitation, we propose a new attack method called CLMIA, which uses unsupervised contrastive learning to train an attack model. CLMIA then requires only a small amount of data with known membership status to fine-tune the attack model. We evaluate the attack's performance using ROC curves, which show a higher true-positive rate (TPR) at low false-positive rates (FPR) compared to other schemes.
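The TPR-at-low-FPR metric mentioned in the abstract can be sketched as follows. This is only an illustration of the evaluation, not of the CLMIA attack itself: the attack model that produces membership scores is assumed to exist, and the scores below are synthetic.

```python
import random

def tpr_at_fpr(member_scores, nonmember_scores, target_fpr=0.01):
    """True-positive rate at a fixed low false-positive rate.

    Scores are attack-model outputs where higher means "more likely
    a training-set member".
    """
    # Pick the threshold so that at most `target_fpr` of the
    # non-members are (wrongly) flagged as members.
    sorted_nm = sorted(nonmember_scores)
    idx = min(int((1.0 - target_fpr) * len(sorted_nm)), len(sorted_nm) - 1)
    thresh = sorted_nm[idx]
    # Fraction of true members whose score exceeds that threshold.
    return sum(s > thresh for s in member_scores) / len(member_scores)

# Toy example with synthetic scores: members score higher on average.
rng = random.Random(0)
members = [rng.gauss(1.0, 1.0) for _ in range(10_000)]
nonmembers = [rng.gauss(0.0, 1.0) for _ in range(10_000)]
print(f"TPR at 1% FPR: {tpr_at_fpr(members, nonmembers, 0.01):.2f}")
```

Reporting TPR at a fixed low FPR (rather than average accuracy or AUC) reflects the setting the abstract describes: since most real-world samples are non-members, an attack is only useful if it identifies members while raising very few false alarms.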

Topics

Privacy-Preserving Technologies in Data · Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education