This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Fairness and AI generalizability in medical image analysis
Citations: 0
Authors: 1
Year: 2026
Abstract
The rapid evolution of artificial intelligence (AI) and in particular machine learning (ML) for healthcare applications has opened many new exciting opportunities to automatically analyze increasingly complex clinical data, especially related to medical imaging, one of the biggest data contributors in healthcare. However, while machine learning models have demonstrated considerable potential in the research setting to improve and accelerate diagnostic accuracy, reduce clinician workload, and enable integration of multi-modal data, their real-world deployment in clinical settings remains limited in many cases. A major obstacle is that ML models trained on medical images consistently fail to generalize well and perform poorly when applied to data from institutions, scanners, and patient populations that were underrepresented in or absent from the training set. These performance gaps often stem from biological and non-biological variations and biases that are only spuriously but not causally correlated with the medical task of interest. However, it remains an open question how such biases propagate through deep learning model architectures and shape learned representations. This invited paper summarizes our recent efforts to address these challenges through controlled bias experiments and distributed learning methods. More precisely, the Simulated Bias in Artificial Medical Images (SimBA) framework is introduced, which enables the generation of realistic brain MRI datasets with known and fully controllable morphological and intensity-based biases, thereby facilitating counterfactual analyses of how biases are encoded and used by deep learning models. Using SimBA, we demonstrate that standard convolutional neural networks encode biases across all layers and that shortcut learning depends on various factors, including spatial proximity, bias salience, and class prevalence. We further discuss distributed training strategies as a scalable solution to overcome structural barriers to data sharing and enhance deep learning model generalizability. Together, this work provides foundational insights toward developing robust, interpretable, and equitable ML solutions for the analysis of medical imaging data and downstream computer-aided diagnosis tasks.
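To make the idea of a controllable, spuriously correlated bias concrete, the following minimal Python/NumPy sketch generates toy two-class image data and injects an intensity artifact into a configurable fraction of one class. This is an illustrative assumption, not the SimBA implementation; the function name, the bias_fraction parameter, and the fixed-corner patch are all hypothetical choices made only for this example.

```python
import numpy as np

def make_synthetic_dataset(n_per_class=100, size=64, bias_fraction=1.0, seed=0):
    """Generate toy 2D 'scans' for two classes and inject a controlled
    intensity bias into a fraction of class-1 samples.

    Hypothetical illustration of a controllable, spurious (non-causal) bias;
    not the SimBA framework described in the paper.
    """
    rng = np.random.default_rng(seed)
    images, labels, biased = [], [], []
    for label in (0, 1):
        for _ in range(n_per_class):
            img = rng.normal(0.0, 1.0, (size, size))
            # "Disease effect": a subtle global intensity shift for class 1
            # (the causal signal the model is supposed to learn).
            if label == 1:
                img += 0.1
            # Spurious bias: a bright patch in a fixed corner, added only to a
            # controllable fraction of class-1 images (e.g. a scanner artifact).
            is_biased = (label == 1) and (rng.random() < bias_fraction)
            if is_biased:
                img[:8, :8] += 2.0
            images.append(img)
            labels.append(label)
            biased.append(is_biased)
    return np.stack(images), np.array(labels), np.array(biased)

# Counterfactual-style comparison: datasets that differ only in bias_fraction.
X_biased, y, b = make_synthetic_dataset(bias_fraction=1.0)
X_clean, _, _ = make_synthetic_dataset(bias_fraction=0.0)
```

Comparing a classifier trained on the fully biased dataset against one trained on the bias-free counterpart indicates how much of its apparent performance derives from the shortcut rather than the causal signal, which is the kind of counterfactual analysis the abstract refers to.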
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,578 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,470 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,984 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,814 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations