This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Who Does Your Algorithm Fail? Investigating Age and Ethnic Bias in the MAMA-MIA Dataset
Citations: 0
Authors: 3
Year: 2025
Abstract
Deep learning models aim to improve diagnostic workflows, but fairness evaluation remains underexplored beyond classification, e.g., in image segmentation. Unaddressed segmentation bias can lead to disparities in the quality of care for certain populations, potentially compounded across clinical decision points and amplified through iterative model development. Here, we audit the fairness of the automated segmentation labels provided in the breast cancer tumor segmentation dataset MAMA-MIA. We evaluate automated segmentation quality across age, ethnicity, and data source. Our analysis reveals an intrinsic age-related bias against younger patients that continues to persist even after controlling for confounding factors, such as data source. We hypothesize that this bias may be linked to physiological factors, a known challenge for both radiologists and automated systems. Finally, we show how aggregating data from multiple data sources influences site-specific ethnic biases, underscoring the necessity of investigating data at a granular level.
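The abstract describes auditing automated segmentation quality across subgroups such as age, ethnicity, and data source. As a minimal illustration of that kind of audit (not the authors' actual pipeline), the sketch below computes a per-case Dice coefficient and aggregates it by a hypothetical subgroup attribute; the field names `pred`, `gt`, and `age_band` are assumptions for the example.

```python
import numpy as np

def dice_score(pred, gt):
    """Dice coefficient between two binary segmentation masks."""
    pred = np.asarray(pred).astype(bool)
    gt = np.asarray(gt).astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    # Convention: two empty masks count as a perfect match.
    return 2.0 * inter / denom if denom > 0 else 1.0

def dice_by_group(cases, group_key):
    """Mean Dice per subgroup (e.g. age band, ethnicity, or source site)."""
    groups = {}
    for case in cases:
        groups.setdefault(case[group_key], []).append(
            dice_score(case["pred"], case["gt"]))
    return {g: float(np.mean(scores)) for g, scores in groups.items()}

# Toy example: one partially overlapping and one perfectly matching mask.
cases = [
    {"age_band": "<40", "pred": np.array([1, 1, 0, 0]), "gt": np.array([1, 0, 0, 0])},
    {"age_band": "60+", "pred": np.array([1, 1, 0, 0]), "gt": np.array([1, 1, 0, 0])},
]
print(dice_by_group(cases, "age_band"))
```

Comparing such per-group means (and, as the abstract notes, stratifying by data source to control for confounding) is the basic mechanism behind a segmentation fairness audit.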
Related Work
A survey on deep learning in medical image analysis
2017 · 13,819 citations
Dermatologist-level classification of skin cancer with deep neural networks
2017 · 13,394 citations
A survey on Image Data Augmentation for Deep Learning
2019 · 11,983 citations
QuPath: Open source software for digital pathology image analysis
2017 · 8,334 citations
Radiomics: Images Are More than Pictures, They Are Data
2015 · 8,101 citations