This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
FedRRA: Reputation-Aware Robust Federated Learning against Poisoning Attacks
Citations: 9
Authors: 5
Year: 2023
Abstract
As an emerging machine learning paradigm, federated learning (FL) allows multiple participants to collaboratively train a shared global model on decentralized data while protecting data privacy. However, traditional FL is susceptible to adversarial poisoning attacks: a global model poisoned by adversaries may fail to converge or suffer accuracy degradation. To defend against data poisoning and model poisoning attacks simultaneously, we propose a Federated learning framework with a Reputation-aware Robust Aggregation (FedRRA) rule. It involves a two-step adversary detection: 1) a DBSCAN algorithm excludes models with obviously biased parameters, and 2) an accuracy evaluation process punishes models with low accuracy. The reputation score computed in this two-step detection determines which clients are removed before aggregation, alleviating the negative influence of models corrupted by adversaries. Extensive experiments demonstrate that FedRRA is superior to state-of-the-art robust FL baselines in defending against model poisoning and data poisoning attacks.
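The two-step detection described in the abstract can be sketched in simplified form. The snippet below is a minimal illustration, not the paper's implementation: it assumes each client update is a flat parameter vector, replaces full DBSCAN with a basic density filter (a point is kept if it has at least `min_pts` neighbours within `eps`), and models the accuracy-based reputation step as a simple threshold. All function names, parameters, and default values are hypothetical.

```python
import math

def euclidean(u, v):
    """Euclidean distance between two flat parameter vectors."""
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

def density_filter(updates, eps=1.0, min_pts=2):
    """Step 1 (DBSCAN-style, simplified): keep indices of updates that
    have at least min_pts neighbours within radius eps (incl. themselves).
    Isolated, obviously biased updates are excluded as outliers."""
    kept = []
    for i, u in enumerate(updates):
        neighbours = sum(1 for v in updates if euclidean(u, v) <= eps)
        if neighbours >= min_pts:
            kept.append(i)
    return kept

def reputation_aggregate(updates, accuracies, eps=1.0, min_pts=2,
                         acc_threshold=0.5):
    """Step 2: punish low-accuracy models, then average the survivors.
    A real reputation scheme would accumulate scores over rounds; here a
    single-round accuracy threshold stands in for the reputation check."""
    kept = density_filter(updates, eps, min_pts)
    kept = [i for i in kept if accuracies[i] >= acc_threshold]
    if not kept:
        raise ValueError("no clients survived filtering")
    dim = len(updates[0])
    # plain average over the remaining (high-reputation) clients
    return [sum(updates[i][d] for i in kept) / len(kept) for d in range(dim)]
```

For example, with three benign updates clustered near `[1.0, 1.0]` and one outlier at `[10.0, 10.0]`, the density filter drops the outlier, and the accuracy threshold then drops any remaining client whose evaluated accuracy is low, before the surviving updates are averaged.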
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,395 cit.
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,872 cit.
Deep Learning with Differential Privacy
2016 · 5,594 cit.
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,591 cit.
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,563 cit.