This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Privacy Regularization: Joint Privacy-Utility Optimization in Language Models
Citations: 5
Authors: 6
Year: 2021
Abstract
Neural language models are known to have a high capacity for memorization of training samples. This may have serious privacy implications when training models on user content such as email correspondence. Differential privacy (DP), a popular choice to train models with privacy guarantees, comes with significant costs in terms of utility degradation and disparate impact on subgroups of users. In this work, we introduce two privacy-preserving regularization methods for training language models that enable joint optimization of utility and privacy through (1) the use of a discriminator and (2) the inclusion of a triplet-loss term. We compare our methods with DP through extensive evaluation. We show the advantages of our regularizers with favorable utility-privacy trade-off, faster training with the ability to tap into existing optimization approaches, and ensuring uniform treatment of under-represented subgroups.
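The abstract mentions a triplet-loss regularization term as one of the two proposed methods. As a rough illustration of what such a term looks like in general (the paper's exact formulation, distance metric, and margin value are not given in the abstract, so all of these are assumptions), a standard triplet loss can be sketched as:

```python
def triplet_loss(anchor, positive, negative, margin=1.0):
    """Standard triplet loss: pull the anchor toward the positive,
    push it away from the negative by at least `margin`.

    Inputs are plain lists of floats representing embeddings; the
    squared-Euclidean distance and the margin of 1.0 are illustrative
    choices, not the paper's.
    """
    def sq_dist(u, v):
        # squared Euclidean distance between two vectors
        return sum((a - b) ** 2 for a, b in zip(u, v))

    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)


# In a privacy-regularization setting, this term would be added to the
# usual language-modeling loss with a weighting coefficient, e.g.:
#   total_loss = lm_loss + lam * triplet_loss(a, p, n)
# where `lam` is a hypothetical trade-off hyperparameter.
```

Here the negative would be chosen so that memorizing user-specific content is penalized, while the language-modeling loss preserves utility; the joint optimization trades the two off via the weighting coefficient.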
Related Work
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,419 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,924 citations
Deep Learning with Differential Privacy
2016 · 5,655 citations
Federated Machine Learning
2019 · 5,629 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,601 citations