This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Combining Stochastic Defenses to Resist Gradient Inversion: An Ablation Study
Citations: 2
Authors: 3
Year: 2022
Abstract
Gradient Inversion (GI) attacks are a ubiquitous threat in Federated Learning (FL) as they exploit gradient leakage to reconstruct supposedly private training data. Common defense mechanisms such as Differential Privacy (DP) or stochastic Privacy Modules (PMs) introduce randomness during gradient computation to prevent such attacks. However, we posit that if an attacker effectively mimics a client's stochastic gradient computation, the attacker can circumvent the defense and reconstruct clients' private training data. This paper introduces several targeted GI attacks that leverage this principle to bypass common defense mechanisms. As a result, we demonstrate that no individual defense provides sufficient privacy protection. To address this issue, we propose to combine multiple defenses. We conduct an extensive ablation study to evaluate the influence of various combinations of defenses on privacy protection and model utility. We observe that only the combination of DP and a stochastic PM was sufficient to decrease the Attack Success Rate (ASR) from 100% to 0%, thus preserving privacy. Moreover, we found that this combination of defenses consistently achieves the best trade-off between privacy and model utility.
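The combination the abstract describes can be illustrated with a minimal sketch: a DP-style step (gradient clipping plus Gaussian noise, as in DP-SGD) composed with a stochastic privacy module. The function names, the random-dropout choice for the PM, and all parameter values here are illustrative assumptions, not the paper's actual defense configuration.

```python
import numpy as np

def dp_noise(grad, clip_norm=1.0, sigma=0.5, rng=None):
    """DP-SGD-style step: clip the gradient norm to bound sensitivity,
    then add calibrated Gaussian noise. Parameter values are illustrative."""
    rng = rng or np.random.default_rng()
    norm = np.linalg.norm(grad)
    clipped = grad * min(1.0, clip_norm / (norm + 1e-12))
    return clipped + rng.normal(0.0, sigma * clip_norm, size=grad.shape)

def stochastic_pm(grad, drop_prob=0.3, rng=None):
    """Hypothetical stochastic privacy module: randomly zero out a fraction
    of gradient entries, adding a second, independent source of randomness."""
    rng = rng or np.random.default_rng()
    mask = rng.random(grad.shape) >= drop_prob
    return grad * mask

def combined_defense(grad, rng=None):
    """Apply both defenses in sequence; per the abstract, the combination
    (not either mechanism alone) is what reduced the ASR to 0%."""
    rng = rng or np.random.default_rng()
    return stochastic_pm(dp_noise(grad, rng=rng), rng=rng)

# Example: defend a client's gradient before it is shared with the server.
client_grad = np.ones(1000)
shared_grad = combined_defense(client_grad, rng=np.random.default_rng(0))
```

The intuition for combining them: an attacker who can mimic one source of randomness (e.g., by sampling many candidate noise draws) faces a much harder joint search when a second, independent stochastic transformation is layered on top.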
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,400 cit.
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,884 cit.
Deep Learning with Differential Privacy
2016 · 5,608 cit.
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,592 cit.
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,570 cit.