This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Privacy-aware and Resource-saving Collaborative Learning for Healthcare in Cloud Computing
Citations: 46
Authors: 5
Year: 2020
Abstract
Electronic health records (EHR), generated in healthcare, contain extensive digital information, such as diagnoses, medications and complications. Recently, many studies have focused on constructing deep learning (DL) models with EHR data to improve the quality of healthcare services. However, in traditional centralized training, the collection of EHR causes serious privacy issues due to vulnerable transmission channels and untrusted DL service providers. An alternative that can mitigate the above privacy threat is federated learning (FL). It enables multiple healthcare institutions to learn a global predictive model by exchanging locally calculated updates without disclosing their private datasets. Unfortunately, recent studies have shown that the local updates still expose sensitive information about the original training data. While several privacy-preserving FL protocols have been proposed, few prior works have addressed energy consumption. Specifically, local training requires extensive computational resources, which is prohibitively expensive for resource-limited institutions. To overcome the above problems, we propose PRCL, a Privacy-aware and Resource-saving Collaborative Learning protocol. To reduce the local computational overhead, we design a novel model splitting method that partitions the neural network into three parts and outsources the computationally heavy middle part to cloud servers. By using lightweight data perturbation and packed partially homomorphic encryption, PRCL protects the privacy of the original data and labels, as well as the parameters of the model. Moreover, we analyze the security of the proposed protocol and demonstrate the superior performance of PRCL in terms of accuracy and efficiency.
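The three-way model split described in the abstract can be sketched as follows. This is a minimal illustration only: the layer sizes, function names, and use of NumPy are assumptions not taken from the paper, and the data perturbation and homomorphic encryption that PRCL applies before handing activations to the cloud are omitted here.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical weights for a toy fully connected network.
# Head and tail stay on the institution's device; only the
# middle layer is outsourced to a cloud server.
W_head = rng.normal(size=(16, 32))  # local head layer
W_mid = rng.normal(size=(32, 32))   # outsourced middle layer
W_tail = rng.normal(size=(32, 4))   # local tail layer

def relu(x):
    return np.maximum(x, 0.0)

def local_head(x):
    # Runs on the healthcare institution; raw EHR features never leave it.
    return relu(x @ W_head)

def cloud_middle(h):
    # Runs on the cloud server; in PRCL it would only ever see
    # perturbed/encrypted activations, not raw data or labels.
    return relu(h @ W_mid)

def local_tail(m):
    # Runs back on the institution, which alone holds the labels.
    return m @ W_tail

x = rng.normal(size=(8, 16))  # a batch of 8 feature vectors
logits = local_tail(cloud_middle(local_head(x)))
print(logits.shape)  # (8, 4)
```

The point of the split is that the institution's compute cost is limited to the small head and tail layers, while the bulk of the forward and backward work happens on the server.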
Similar Works
k-Anonymity: A Model for Protecting Privacy
2002 · 8,395 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,867 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,591 citations
Deep Learning with Differential Privacy
2016 · 5,587 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,559 citations