OpenAlex · Updated hourly · Last updated: 21.03.2026, 09:47

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Protecting Machine Learning Models from Training Data Set Extraction

2024 · 0 citations · Automatic Control and Computer Sciences
Open full text at the publisher

0 citations · 3 authors · year 2024

Abstract

The problem of protecting machine learning models from data-privacy violations carried out through membership inference attacks on the training data set is considered. A method of protective noising of the training set is proposed. It is shown experimentally that Gaussian noising of the training data with a scale of 0.2 is the simplest and most effective way to protect machine learning models from membership inference on the training set. Compared with alternatives, this method is easy to implement, universal with respect to model type, and reduces the effectiveness of membership inference to 26 percentage points.
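The defense described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes "scale of 0.2" means zero-mean Gaussian noise with standard deviation 0.2 added to each training feature before model fitting, and the function name `noise_training_set` is hypothetical.

```python
import numpy as np

def noise_training_set(X, scale=0.2, seed=0):
    """Sketch of protective noising: add zero-mean Gaussian noise
    (std = scale) to every feature of the training matrix X.
    The interpretation of 'scale' as a standard deviation is an
    assumption; the paper's exact procedure may differ."""
    rng = np.random.default_rng(seed)
    return X + rng.normal(loc=0.0, scale=scale, size=X.shape)

# Usage: noise a toy training matrix before fitting any model.
X_train = np.ones((4, 3))
X_noisy = noise_training_set(X_train, scale=0.2)
print(X_noisy.shape)  # (4, 3)
```

Because the noise is applied to the data rather than to the model or its gradients, the approach is model-agnostic, which matches the abstract's claim of universality with respect to model type.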

Topics

Privacy-Preserving Technologies in Data · Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education