OpenAlex · Updated hourly · Last updated: 12.03.2026, 08:26

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Data Poisoning in Sequential and Parallel Federated Learning

2022 · 35 citations
Open full text at the publisher

Citations: 35
Authors: 2
Year: 2022

Abstract

Federated machine learning has recently become a prominent approach to leverage data that is distributed across different clients, without the need to centralize it. Models are trained locally, and only model parameters are shared and aggregated into a global model. Federated learning can increase the privacy of sensitive data, as the data itself is never shared, and it benefits from the distributed setting by utilizing the computational resources of the clients. Adversarial machine learning attacks machine learning systems with respect to their confidentiality, integrity, or availability, and recent research has shown that many forms of machine learning are susceptible to such attacks. Besides its advantages, federated learning opens new attack surfaces due to its distributed nature, which amplifies concerns about adversarial attacks. In this paper, we evaluate data poisoning attacks in federated settings. By altering certain training inputs used in the training phase with a specific pattern, an adversary may later trigger malicious behavior in the prediction phase. We show on datasets for traffic sign and face recognition that federated learning is effective on a similar level as centralized learning, but is indeed vulnerable to data poisoning attacks. We test both parallel and sequential (incremental cyclic) federated learning, and perform an in-depth analysis of several hyper-parameters of the adversary.
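The abstract describes two federated training schemes (parallel aggregation and incremental cyclic training) and a trigger-based data poisoning attack. The following is a minimal, self-contained sketch of these ideas on a toy logistic-regression model; the trigger shape, poisoning fraction, and all hyper-parameters are illustrative assumptions, not the paper's actual setup.

```python
import numpy as np

def poison(X, y, target_label, frac=0.1, rng=None):
    # Data poisoning sketch: stamp a trigger pattern (here: last 3
    # features set to 1.0) on a fraction of inputs and relabel them
    # to the attacker's target class.
    rng = rng or np.random.default_rng(0)
    X, y = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(frac * len(X)), replace=False)
    X[idx, -3:] = 1.0
    y[idx] = target_label
    return X, y

def local_sgd(w, X, y, lr=0.1, epochs=5):
    # One client's local training: logistic regression via full-batch
    # gradient descent, starting from the received global weights w.
    for _ in range(epochs):
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def parallel_round(w, clients):
    # Parallel federated round: every client trains from the same
    # global model; the server averages the resulting weights.
    return np.mean([local_sgd(w, X, y) for X, y in clients], axis=0)

def sequential_round(w, clients):
    # Sequential (incremental cyclic) round: the model is passed from
    # client to client, each continuing training where the last left off.
    for X, y in clients:
        w = local_sgd(w, X, y)
    return w
```

In this sketch a malicious client would simply call `poison` on its local split before training; at prediction time, any input carrying the trigger pattern is pushed toward the target label, while clean inputs are largely unaffected.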


Topics

Privacy-Preserving Technologies in Data
Adversarial Robustness in Machine Learning
Artificial Intelligence in Healthcare and Education