This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Federated learning for maximum differential choice based on the global perspective
0
Citations
4
Authors
2022
Year
Abstract
Federated learning has received extensive attention as a new distributed learning framework that enables joint modeling without data sharing. However, it is still limited by the communication bottleneck: most clients cannot participate in joint training at the same time, which slows convergence. To address this problem, we propose a federated learning aggregation algorithm based on a global perspective that takes the data distributions of the participating clients into account. The server builds a feature distribution table from these data distributions, and each time it selects a group of clients for training, it chooses them so as to cover as many features as possible and thereby learn the global data more fully. The selection of these clients is therefore not random: within the range of visible clients, the server constructs the set of clients whose distributions differ most from one another, and after each round of training the selected clients are placed at the end of the selection chain until every client has been selected. We demonstrate the effectiveness of our work through comprehensive experiments and comparisons with the two most popular algorithms. Specifically, our algorithm achieves an average speedup of 40% compared to traditional algorithms.
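The abstract's selection rule, picking a group of clients whose feature distributions differ most from one another, can be sketched as a greedy procedure. The following is a minimal illustration, not the paper's actual algorithm: the function names, the L1 distance metric, and the greedy "farthest-from-the-set" heuristic are all assumptions made here for clarity.

```python
# Hypothetical sketch of max-difference client selection.
# The server's "feature distribution table" is modeled as a dict
# mapping each client ID to a feature-frequency vector.

def l1_distance(p, q):
    """L1 distance between two feature-frequency vectors (assumed metric)."""
    return sum(abs(a - b) for a, b in zip(p, q))

def select_clients(feature_table, k):
    """Greedily build a group of k clients with large mutual
    distribution differences (one plausible reading of the paper)."""
    clients = list(feature_table)
    dim = len(next(iter(feature_table.values())))
    # Seed with the client farthest from the mean distribution.
    mean = [sum(feature_table[c][i] for c in clients) / len(clients)
            for i in range(dim)]
    first = max(clients, key=lambda c: l1_distance(feature_table[c], mean))
    chosen = [first]
    remaining = [c for c in clients if c != first]
    while len(chosen) < k and remaining:
        # Add the client with the largest total distance to those
        # already chosen, i.e. the most "different" remaining client.
        nxt = max(remaining,
                  key=lambda c: sum(l1_distance(feature_table[c],
                                                feature_table[s])
                                    for s in chosen))
        chosen.append(nxt)
        remaining.remove(nxt)
    return chosen

# Toy table: client ID -> label-frequency vector (illustrative data only).
table = {
    "c0": [0.9, 0.1, 0.0],
    "c1": [0.8, 0.2, 0.0],
    "c2": [0.0, 0.1, 0.9],
    "c3": [0.1, 0.8, 0.1],
}
print(select_clients(table, 3))  # → ['c2', 'c0', 'c3']
```

Here "c1" is skipped because its distribution nearly duplicates "c0", so the chosen group covers all three features; the paper's mechanism of re-queuing trained clients at the end of the selection chain would wrap such a step in an outer loop over training rounds.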
Related works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,397 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,878 citations
Deep Learning with Differential Privacy
2016 · 5,604 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,592 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,569 citations