OpenAlex · Updated hourly · Last updated: 06.05.2026, 14:29

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

Fairness and AI generalizability in medical image analysis

2026 · 0 citations

Citations: 0 · Authors: 1 · Year: 2026

Abstract

The rapid evolution of artificial intelligence (AI), and in particular machine learning (ML), for healthcare applications has opened many exciting new opportunities to automatically analyze increasingly complex clinical data, especially medical imaging, one of the biggest data contributors in healthcare. However, while machine learning models have demonstrated considerable potential in research settings to improve and accelerate diagnostic accuracy, reduce clinician workload, and enable integration of multi-modal data, their real-world deployment in clinical settings remains limited in many cases. A major obstacle is that ML models trained on medical images consistently fail to generalize, performing poorly when applied to data from institutions, scanners, and patient populations that were absent from, or poorly represented in, the training set. These performance gaps often stem from biological and non-biological variations and biases that are spuriously, but not causally, correlated with the medical task of interest. However, it remains an open question how such biases propagate through deep learning model architectures and shape learned representations. This invited paper summarizes our recent efforts to address these challenges through controlled bias experiments and distributed learning methods. More precisely, the Simulated Bias in Artificial Medical Images (SimBA) framework is introduced, which enables the generation of realistic brain MRI datasets with known and fully controllable morphological and intensity-based biases, thereby facilitating counterfactual analyses of how biases are encoded and used by deep learning models. Using SimBA, we demonstrate that standard convolutional neural networks encode biases across all layers and that shortcut learning depends on various factors, including spatial proximity, bias salience, and class prevalence.
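The core idea behind such controlled bias experiments can be illustrated with a toy sketch. The following NumPy snippet is an assumption-laden, minimal stand-in (it is not the SimBA framework or its API): an additive intensity "bias" is injected into synthetic image patches so that it correlates spuriously with the class label, and a deliberately naive mean-intensity classifier exploits that shortcut, succeeding on a biased test set but dropping to chance once the bias is removed.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(n, bias_strength):
    """Toy stand-in for a bias-injection experiment (not the SimBA API).

    Each 'image' is a 16x16 noise patch; the true class signal is a small
    central bump, and a global intensity offset is spuriously correlated
    with the label at the given strength."""
    labels = rng.integers(0, 2, n)
    imgs = rng.normal(0.0, 1.0, (n, 16, 16))
    imgs[labels == 1, 7:9, 7:9] += 0.5             # weak causal signal
    imgs += bias_strength * labels[:, None, None]  # spurious intensity bias
    return imgs, labels

def mean_intensity_classifier(train_imgs, train_labels, test_imgs):
    """Threshold on mean intensity -- a deliberate 'shortcut' learner."""
    means = train_imgs.mean(axis=(1, 2))
    thresh = 0.5 * (means[train_labels == 0].mean()
                    + means[train_labels == 1].mean())
    return (test_imgs.mean(axis=(1, 2)) > thresh).astype(int)

# Train with a strong spurious bias, then test with and without it.
Xtr, ytr = make_dataset(2000, bias_strength=1.0)
Xbi, ybi = make_dataset(2000, bias_strength=1.0)   # biased test set
Xun, yun = make_dataset(2000, bias_strength=0.0)   # bias-free test set

acc_biased = (mean_intensity_classifier(Xtr, ytr, Xbi) == ybi).mean()
acc_unbiased = (mean_intensity_classifier(Xtr, ytr, Xun) == yun).mean()
print(f"accuracy on biased test set:    {acc_biased:.2f}")
print(f"accuracy on bias-free test set: {acc_unbiased:.2f}")
```

Because the bias is known and fully controllable, the same data-generating function supports the kind of counterfactual comparison described above: only `bias_strength` changes between the two test sets, so the performance gap is attributable to the shortcut alone.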
We further discuss distributed training strategies as a scalable solution to overcome structural barriers to data sharing and enhance deep learning model generalizability. Together, this work provides foundational insights toward developing robust, interpretable, and equitable ML solutions for the analysis of medical imaging data and downstream computer-aided diagnosis tasks.
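The distributed training strategies are discussed here only at a high level. As an illustrative sketch of the general principle (not the authors' actual method), the following NumPy snippet runs federated-averaging-style training of a logistic model across three simulated sites with site-specific covariate shift; the number of sites, the shifts, and the linear model are all assumptions. Only the weight vector crosses site boundaries, never the raw data.

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def local_update(w, X, y, lr=0.1, epochs=5):
    """A few epochs of plain logistic-regression gradient descent on one site."""
    for _ in range(epochs):
        grad = X.T @ (sigmoid(X @ w) - y) / len(y)
        w = w - lr * grad
    return w

# Three simulated 'sites' with site-specific covariate shift; in a federated
# setup only the weight vectors travel, the raw data never leaves a site.
w_true = rng.normal(size=8)                 # hypothetical ground-truth weights
sites = []
for shift in (0.0, 0.5, -0.5):
    X = rng.normal(shift, 1.0, (500, 8))
    y = (X @ w_true > 0).astype(float)
    sites.append((X, y))

w = np.zeros(8)
for _ in range(50):                         # federated averaging rounds
    w = np.mean([local_update(w, X, y) for X, y in sites], axis=0)

# Evaluate the aggregated global model on fresh, unshifted data.
X_test = rng.normal(0.0, 1.0, (2000, 8))
y_test = (X_test @ w_true > 0).astype(float)
acc = float(((sigmoid(X_test @ w) > 0.5) == y_test).mean())
print(f"federated model accuracy on held-out data: {acc:.2f}")
```

The design point this sketch makes concrete is the structural one from the abstract: each site performs local updates on its private data and only model parameters are averaged centrally, which is what lets such schemes sidestep institutional barriers to data sharing while still training on heterogeneous, multi-site data.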


Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI