This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
PriMed: Private federated training and encrypted inference on medical images in healthcare
Citations: 5
Authors: 6
Year: 2023
Abstract
In healthcare, patient information is a scarce, critical asset that is treated as private data and is often protected by law. Healthcare is also one of the least explored domains in machine learning. The main reason is that building efficient artificial intelligence (AI) models for the preliminary diagnosis of various diseases requires a large corpus of data, which can be obtained by pooling patient information from multiple sources. However, for these sources to agree to share their data across distributed systems for training algorithms and models, there must be an assurance that the personally identifiable information (PII) of the respective data owners will not be disclosed. This paper proposes PriMed, an approach to building robust privacy-preserving additions to convolutional neural networks (CNNs) for training and performing inference on medical images without compromising privacy. Because the privacy of the data is preserved, large amounts of data can be accumulated effectively to increase the accuracy and efficiency of AI models in healthcare. The approach combines privacy-enhancing techniques such as Federated Learning, Differential Privacy, and Homomorphic Encryption to provide a private and secure environment for learning from data.
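The abstract describes combining federated learning with differential privacy so that no raw patient data leaves its source. The paper itself is not reproduced here, so the sketch below is only a minimal illustration of the general DP-FedAvg idea it alludes to: each client's model update is clipped to bound its sensitivity, the clipped updates are averaged, and calibrated Gaussian noise is added. The function names, the clip norm, and the noise multiplier are all assumptions for illustration, not the paper's actual parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def clip_update(update, clip_norm):
    """Clip a client update to L2 norm <= clip_norm, bounding its sensitivity."""
    norm = np.linalg.norm(update)
    return update * min(1.0, clip_norm / norm)

def dp_federated_average(client_updates, clip_norm=1.0, noise_multiplier=0.5):
    """Average clipped client updates and add Gaussian noise (DP-FedAvg style)."""
    clipped = [clip_update(u, clip_norm) for u in client_updates]
    mean = np.mean(clipped, axis=0)
    # Noise scale is calibrated to the clipped sensitivity of the average.
    sigma = noise_multiplier * clip_norm / len(client_updates)
    return mean + rng.normal(0.0, sigma, size=mean.shape)

# Simulated round: three hospitals each contribute a local gradient update
# without sharing their underlying patient data.
updates = [rng.normal(size=4) for _ in range(3)]
global_step = dp_federated_average(updates)
```

In a full pipeline along the lines the abstract sketches, the aggregated model would additionally be evaluated under homomorphic encryption at inference time; that stage requires an HE library and is omitted here.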
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,395 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,872 citations
Deep Learning with Differential Privacy
2016 · 5,594 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,591 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,563 citations