This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Grouped Federated Meta-Learning for Privacy-Preserving Rare Disease Diagnosis
Citations: 3
Authors: 7
Year: 2024
Abstract
Federated learning (FL) has been widely applied in the medical field, allowing clients to collaboratively train global models without sharing local data. Nevertheless, the diversity and scarcity of rare-disease samples may degrade the performance of local models on the client side when a single global model is used. Moreover, directly transmitting local models or parameters is likely to violate user privacy. To solve these problems, we propose a Grouped Federated Meta-Learning (GrFML) method that improves the performance of local personalization models while protecting data privacy. Specifically, we first use a self-attention mechanism to extract partial features from each client's local data, which are uploaded to the server (only partial, perturbation-tolerant features of the medical data are transmitted, so this process does not expose the private data). The server groups clients with similar features based on these extracted features. Multiple meta-models are then trained on these groups and distributed back to the clients to enhance the performance of the clients' local models. Furthermore, during the FL process, we introduce dynamic perturbation to the uploaded gradients based on the model's test accuracy to protect their privacy. Typically, the perturbation magnitude is directly proportional to the model's test accuracy. Extensive experiments show that GrFML significantly improves the accuracy of client personalization models and achieves a good privacy-utility trade-off.
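The abstract's dynamic-perturbation idea, where gradient noise scales with the model's test accuracy, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the function name, Gaussian noise model, and `base_sigma` parameter are assumptions; the paper only states that the perturbation magnitude is proportional to test accuracy.

```python
import numpy as np

def perturb_gradients(grads, test_acc, base_sigma=0.1, rng=None):
    """Add Gaussian noise to each gradient tensor before upload.

    The noise scale grows linearly with `test_acc` (in [0, 1]): a more
    accurate model leaks more about its training data, so it is
    perturbed more strongly. Names and constants are illustrative.
    """
    rng = rng or np.random.default_rng(0)
    sigma = base_sigma * test_acc  # perturbation proportional to accuracy
    return [g + rng.normal(0.0, sigma, size=g.shape) for g in grads]

# A client with 0% test accuracy uploads unperturbed gradients;
# one with 90% accuracy uploads noisier ones.
grads = [np.ones((2, 2)), np.zeros(3)]
clean = perturb_gradients(grads, test_acc=0.0)
noisy = perturb_gradients(grads, test_acc=0.9)
```

Under this sketch the privacy-utility trade-off is tuned via `base_sigma`: larger values give stronger protection for accurate models at the cost of slower convergence of the grouped meta-models.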
Similar Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,400 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,884 citations
Deep Learning with Differential Privacy
2016 · 5,608 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,592 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,570 citations