This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Training Data Leakage Analysis in Language Models
Citations: 22 · Authors: 7 · Year: 2021
Abstract
Recent advances in neural network based language models have led to successful deployments of such models, improving user experience in various applications. It has been demonstrated that the strong performance of language models comes along with the ability to memorize rare training samples, which poses serious privacy threats when the model is trained on confidential user content. In this work, we introduce a methodology for identifying the user content in the training data that could be leaked under a strong and realistic threat model. We propose two metrics to quantify user-level data leakage by measuring a model's ability to produce unique sentence fragments within the training data. Our metrics further enable comparing different models trained on the same data in terms of privacy. We demonstrate our approach through extensive numerical studies on both RNN and Transformer based models. We further illustrate how the proposed metrics can be utilized to investigate the efficacy of mitigations like differentially private training or API hardening.
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,413 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,916 citations
Deep Learning with Differential Privacy
2016 · 5,640 citations
Federated Machine Learning
2019 · 5,609 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,600 citations