This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Exploring Uncertainty in Medical Federated Learning: A Survey
Citations: 0
Authors: 3
Year: 2025
Abstract
The adoption of artificial intelligence (AI) in healthcare requires not only accurate predictions but also a clear understanding of their reliability. In safety-critical domains such as medical imaging and diagnosis, clinicians must assess the confidence of model outputs to ensure safe decision making. Uncertainty quantification (UQ) addresses this need by providing confidence estimates and identifying situations in which models may fail. Such uncertainty estimates enable risk-aware deployment, improve model robustness, and ultimately strengthen clinical trust. Although prior studies have surveyed UQ in centralized learning, a systematic review in the federated learning (FL) context is still lacking. As a privacy-preserving collaborative paradigm, FL enables institutions to jointly train models without sharing raw patient data. However, compared with centralized learning, FL introduces more complex sources of uncertainty. In addition to data uncertainty caused by noisy inputs and model uncertainty from distributed optimization, there also exists distributional uncertainty arising from client heterogeneity and personalized uncertainty associated with site-specific biases. These intertwined uncertainties complicate model reliability and highlight the urgent need for UQ strategies tailored to federated settings. This survey reviews UQ in medical FL. We categorize uncertainties unique to FL and compare them with those in centralized learning. We examine the sources of uncertainty, existing FL architectures, UQ methods, and their integration with privacy-preserving techniques, and we analyze their advantages, limitations, and trade-offs. Finally, we highlight key challenges, including scalable UQ under non-IID (non-independent and identically distributed) conditions, federated out-of-distribution (OOD) detection, and clinical validation, and outline future opportunities such as hybrid UQ strategies and personalization. By combining methodological advances in UQ with application perspectives, this survey provides a structured overview to inform the development of more reliable and privacy-preserving FL systems in healthcare.
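To make the setting concrete, here is a minimal, illustrative sketch of federated averaging (FedAvg) combined with an ensemble-style uncertainty estimate, where disagreement among client models serves as a simple proxy for distributional uncertainty under non-IID data. All names, the logistic-regression clients, and the synthetic data are hypothetical; this is not a method from the survey, only a toy instance of the ideas the abstract describes.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_train(w, X, y, lr=0.1, epochs=50):
    """One client's local logistic-regression update via gradient descent."""
    w = w.copy()
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w -= lr * grad
    return w

# Three hypothetical clients with shifted (non-IID) feature distributions,
# standing in for hospitals with site-specific data biases.
clients = []
for shift in (-1.0, 0.0, 1.0):
    X = rng.normal(shift, 1.0, size=(200, 2))
    y = (X[:, 0] + X[:, 1] > 0).astype(float)
    clients.append((X, y))

# FedAvg: each round, clients train locally and the server averages weights.
w_global = np.zeros(2)
for _ in range(10):
    local_ws = [local_train(w_global, X, y) for X, y in clients]
    w_global = np.mean(local_ws, axis=0)

# Ensemble-style UQ: disagreement of the client models on a query point.
# High standard deviation flags inputs where the federation is uncertain.
x_query = np.array([0.5, -0.5])
preds = np.array([sigmoid(x_query @ w) for w in local_ws])
print(f"mean prediction {preds.mean():.3f}, disagreement {preds.std():.3f}")
```

Only model weights cross the client boundary here, which mirrors FL's privacy motivation; a real system would add the privacy-preserving mechanisms (e.g., differential privacy) and calibrated UQ methods that the survey reviews.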
Related Works
k-Anonymity: A Model for Protecting Privacy
2002 · 8,401 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,885 citations
Deep Learning with Differential Privacy
2016 · 5,610 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,593 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,570 citations