This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Differential privacy for medical deep learning: methods, tradeoffs, and deployment implications
Citations: 0
Authors: 7
Year: 2026
Abstract
Differential privacy (DP) is a prominent technique for protecting sensitive patient data in medical deep learning (DL), yet deploying it without compromising clinical utility or equity remains challenging. This scoping review synthesizes applications of DP in medical DL across centralized and federated settings. A structured search identified 74 eligible studies published through March 2025. Across modalities and tasks, DP, especially via DP-SGD, can maintain clinically acceptable performance under moderate privacy budgets (ϵ ≈ 10), particularly in imaging. However, strict privacy (ϵ ≈ 1) often leads to substantial accuracy loss, with amplified degradation in smaller or heterogeneous datasets. Only a minority of studies evaluate fairness, and several report that DP can widen subgroup performance gaps. Beyond DP-SGD, alternative mechanisms, including generative modeling, local DP, and hybrid federated designs, are emerging, but reporting of privacy parameters remains inconsistent. We identify key gaps in fairness auditing and standardization, and outline priorities for equitable, clinically robust privacy-preserving DL.
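The DP-SGD mechanism the abstract refers to combines per-example gradient clipping with Gaussian noise before each parameter update. A minimal NumPy sketch of that single step is shown below; the function name, parameter values, and example gradients are illustrative, not taken from the paper, and real deployments track the privacy budget ϵ with an accountant, which this sketch omits.

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm, noise_multiplier, rng):
    """One DP-SGD update (illustrative sketch):
    clip each example's gradient to clip_norm, average,
    then add Gaussian noise scaled to the clipping bound."""
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down gradients whose L2 norm exceeds clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    batch = len(clipped)
    avg = np.mean(clipped, axis=0)
    # Noise standard deviation follows the clipped sensitivity.
    noise = rng.normal(0.0, noise_multiplier * clip_norm / batch,
                       size=avg.shape)
    return avg + noise

rng = np.random.default_rng(0)
grads = [np.array([3.0, 4.0]), np.array([0.1, 0.2])]  # hypothetical per-example gradients
update = dp_sgd_step(grads, clip_norm=1.0, noise_multiplier=1.1, rng=rng)
```

With `noise_multiplier=0` the step reduces to ordinary clipped averaging, which makes the privacy/utility tradeoff the abstract describes explicit: larger multipliers buy a smaller ϵ at the cost of noisier updates.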
Similar works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,401 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,886 citations
Deep Learning with Differential Privacy
2016 · 5,612 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,593 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,570 citations