OpenAlex · Updated hourly · Last updated: 14.03.2026, 19:08

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Fairness in Cardiac MR Image Analysis: An Investigation of Bias Due to Data Imbalance in Deep Learning Based Segmentation

2021 · 0 citations · arXiv (Cornell University) · Open Access
Open full text at the publisher

Citations: 0 · Authors: 7 · Year: 2021

Abstract

The subject of "fairness" in artificial intelligence (AI) refers to assessing AI algorithms for potential bias based on demographic characteristics such as race and gender, and the development of algorithms to address this bias. Most applications to date have been in computer vision, although some work in healthcare has started to emerge. The use of deep learning (DL) in cardiac MR segmentation has led to impressive results in recent years, and such techniques are starting to be translated into clinical practice. However, no work has yet investigated the fairness of such models. In this work, we perform such an analysis for racial/gender groups, focusing on the problem of training data imbalance, using a nnU-Net model trained and evaluated on cine short axis cardiac MR data from the UK Biobank dataset, consisting of 5,903 subjects from 6 different racial groups. We find statistically significant differences in Dice performance between different racial groups. To reduce the racial bias, we investigated three strategies: (1) stratified batch sampling, in which batch sampling is stratified to ensure balance between racial groups; (2) fair meta-learning for segmentation, in which a DL classifier is trained to classify race and jointly optimized with the segmentation model; and (3) protected group models, in which a different segmentation model is trained for each racial group. We also compared the results to the scenario where we have a perfectly balanced database. To assess fairness we used the standard deviation (SD) and skewed error ratio (SER) of the average Dice values. Our results demonstrate that the racial bias results from the use of imbalanced training data, and that all proposed bias mitigation strategies improved fairness, with the best SD and SER resulting from the use of protected group models.
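The two fairness metrics named in the abstract can be sketched in a few lines. This is a minimal illustration, not the authors' code: the per-group Dice values below are invented for demonstration, and SER is taken here as a common definition, the ratio of the largest to the smallest per-group error (1 − Dice), so that a value of 1.0 means perfectly balanced errors.

```python
import statistics

# Hypothetical per-group average Dice scores (illustrative only, not from the paper).
dice_by_group = {
    "group_a": 0.93,
    "group_b": 0.91,
    "group_c": 0.88,
    "group_d": 0.90,
}

def fairness_metrics(dice):
    """Return (SD, SER) over per-group average Dice values.

    SD: population standard deviation of the group means (lower is fairer).
    SER: ratio of the largest to the smallest per-group error, where
    error = 1 - Dice; 1.0 indicates identical error across groups.
    """
    values = list(dice.values())
    sd = statistics.pstdev(values)
    errors = [1.0 - v for v in values]
    ser = max(errors) / min(errors)
    return sd, ser

sd, ser = fairness_metrics(dice_by_group)
print(f"SD = {sd:.4f}, SER = {ser:.2f}")
```

Lower SD and an SER closer to 1.0 both indicate more even segmentation quality across groups, which is how the paper compares its three mitigation strategies.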

Topics

Cardiac Imaging and Diagnostics · Artificial Intelligence in Healthcare and Education · Radiomics and Machine Learning in Medical Imaging