OpenAlex · Updated hourly · Last updated: 26.03.2026, 21:51

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Differential privacy for medical deep learning: methods, tradeoffs, and deployment implications

2026 · 0 citations · npj Digital Medicine · Open Access

Citations: 0 · Authors: 7 · Year: 2026

Abstract

Differential privacy (DP) is a prominent technique for protecting sensitive patient data in medical deep learning (DL), yet deploying it without compromising clinical utility or equity remains challenging. This scoping review synthesizes applications of DP in medical DL across centralized and federated settings. A structured search identified 74 eligible studies published through March 2025. Across modalities and tasks, DP, especially via DP-SGD, can maintain clinically acceptable performance under moderate privacy budgets (ε ≈ 10), particularly in imaging. However, strict privacy (ε ≈ 1) often leads to substantial accuracy loss, with amplified degradation in smaller or heterogeneous datasets. Only a minority of studies evaluate fairness, and several report that DP can widen subgroup performance gaps. Beyond DP-SGD, alternative mechanisms, including generative modeling, local DP, and hybrid federated designs, are emerging, but reporting of privacy parameters remains inconsistent. We identify key gaps in fairness auditing and standardization, and outline priorities for equitable, clinically robust privacy-preserving DL.
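The DP-SGD mechanism the abstract refers to works by clipping each per-example gradient to a fixed norm and adding calibrated Gaussian noise before the parameter update. A minimal NumPy sketch of one aggregation step, purely illustrative (the function name, defaults, and simplifications are not from the paper, and real deployments would use a library such as Opacus together with a privacy accountant to track ε):

```python
import numpy as np

def dp_sgd_step(per_example_grads, clip_norm=1.0, noise_multiplier=1.1, rng=None):
    """One illustrative DP-SGD aggregation step.

    Each per-example gradient is rescaled so its L2 norm is at most
    clip_norm, the clipped gradients are summed, Gaussian noise with
    standard deviation noise_multiplier * clip_norm is added, and the
    result is averaged over the batch.
    """
    rng = rng if rng is not None else np.random.default_rng(0)
    clipped = []
    for g in per_example_grads:
        norm = np.linalg.norm(g)
        # Scale down (never up) so the contribution of any single
        # example is bounded by clip_norm.
        clipped.append(g * min(1.0, clip_norm / (norm + 1e-12)))
    total = np.sum(clipped, axis=0)
    noise = rng.normal(0.0, noise_multiplier * clip_norm, size=total.shape)
    return (total + noise) / len(per_example_grads)
```

The clipping bound is what ties the noise scale to a per-example sensitivity; smaller ε (stricter privacy) requires a larger `noise_multiplier`, which is one intuition for the accuracy loss the review reports at ε ≈ 1.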
