This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Privacy-Preserving Machine Learning Models for Medical Data Ensuring Security in Smart Healthcare Systems
Citations: 0
Authors: 5
Year: 2025
Abstract
Machine Learning (ML) is being adopted across the healthcare industry for diagnostics, treatment selection, and the analysis of patient information. Managing this data, however, increases risks to patients' private information. This chapter explores the theoretical frameworks and implications of privacy-enhancing ML methods, namely differential privacy, Federated Learning (FL), and homomorphic encryption, which enable meaningful analysis of medical data while keeping patient information undisclosed. Differential privacy adds controlled noise to minimize the risk of linking data to an individual. FL allows models to be trained without centralizing data, and homomorphic encryption enables computations directly on encrypted data, guaranteeing data safety during processing. We also explain how blockchain could strengthen these privacy-preserving approaches by providing secure data sharing as well as auditability and accountability in model management.
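To make the differential-privacy idea concrete, the following is a minimal sketch (not from the chapter) of an epsilon-differentially-private mean over bounded values, using the standard Laplace mechanism. The function and variable names (`dp_mean`, `laplace_sample`, `lower`, `upper`) are illustrative, not from the source.

```python
import math
import random

def laplace_sample(scale: float) -> float:
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = random.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_mean(values, epsilon: float, lower: float, upper: float) -> float:
    """Epsilon-differentially-private mean of bounded values (illustrative sketch).

    Clipping each record to [lower, upper] bounds its influence, so the
    L1 sensitivity of the mean is (upper - lower) / n. Adding Laplace
    noise with scale sensitivity / epsilon yields epsilon-DP.
    """
    clipped = [min(max(v, lower), upper) for v in values]
    n = len(clipped)
    sensitivity = (upper - lower) / n
    return sum(clipped) / n + laplace_sample(sensitivity / epsilon)
```

Smaller `epsilon` means stronger privacy but noisier answers; real deployments would also account for the privacy budget spent across repeated queries.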
Related Work
k-Anonymity: A Model for Protecting Privacy
2002 · 8,395 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,872 citations
Deep Learning with Differential Privacy
2016 · 5,595 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,591 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,564 citations