OpenAlex · Updated hourly · Last updated: 12.03.2026, 03:51

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Federated Regularization Learning: an Accurate and Safe Method for Federated Learning

2021 · 16 citations
Open full text at publisher

16 citations · 3 authors · published 2021

Abstract

Distributed machine learning (ML) and related techniques such as federated learning face a high risk of information leakage. Differential privacy (DP) is commonly used to protect privacy. However, it suffers from low accuracy due to the unbalanced data distribution in federated learning and the additional noise introduced by DP itself. In this paper, we propose a novel federated learning model that can protect data privacy from gradient leakage attacks and black-box membership inference attacks (MIA). The proposed protection scheme makes the data hard to reproduce and hard to distinguish from the model's predictions. A small simulated attacker network is embedded as a regularization penalty to defend against malicious attacks. We further introduce a gradient modification method to secure the weight information and remedy the additional accuracy loss. The proposed privacy protection scheme is evaluated on MNIST and CIFAR-10 and compared with state-of-the-art DP-based federated learning models. Experimental results demonstrate that our model can successfully defend against diverse external attacks on user-level privacy with negligible accuracy loss.
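The abstract describes two components: an embedded attacker network whose success is penalized as a regularization term, and a gradient modification step applied before weights are shared. The paper's exact formulation is not given on this page; the sketch below is only an illustration of those two ideas under assumed choices (the weighting factor `lam`, the clipping threshold, and the noise scale are all hypothetical, not taken from the paper).

```python
import numpy as np

def regularized_loss(task_loss, attacker_confidence, lam=0.5):
    # Illustrative combined objective: the main model is penalized when the
    # embedded simulated attacker succeeds (high attacker_confidence).
    # `lam` weights that penalty; the value 0.5 is an assumption.
    return task_loss + lam * attacker_confidence

def modify_gradient(grad, clip_norm=1.0, noise_scale=0.01, rng=None):
    # Illustrative gradient modification before sharing updates with the
    # server: clip the gradient norm, then add a small perturbation so the
    # exact weight information is not exposed. Parameters are assumptions.
    rng = rng or np.random.default_rng(0)
    norm = np.linalg.norm(grad)
    if norm > clip_norm:
        grad = grad * (clip_norm / norm)
    return grad + rng.normal(scale=noise_scale, size=grad.shape)

g = modify_gradient(np.array([3.0, 4.0]))  # norm 5 is clipped toward 1
```

Compared with pure DP noise injection, the attacker-as-regularizer idea trades extra noise for an adversarial training signal, which is how the paper motivates its smaller accuracy loss.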

Topics

Privacy-Preserving Technologies in Data · Adversarial Robustness in Machine Learning · Artificial Intelligence in Healthcare and Education