OpenAlex · Updated hourly · Last updated: Mar 11, 2026, 14:06

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

Bias in machine learning models can be significantly mitigated by careful training: Evidence from neuroimaging studies

2023 · 72 citations · Proceedings of the National Academy of Sciences · Open Access
Open full text at the publisher

Citations: 72
Authors: 3
Year: 2023

Abstract

Despite the great promise that machine learning has offered in many fields of medicine, it has also raised concerns about potential biases and poor generalization across genders, age distributions, races and ethnicities, hospitals, and data acquisition equipment and protocols. In the current study, and in the context of three brain diseases, we provide evidence which suggests that when properly trained, machine learning models can generalize well across diverse conditions and do not necessarily suffer from bias. Specifically, by using multistudy magnetic resonance imaging consortia for diagnosing Alzheimer's disease, schizophrenia, and autism spectrum disorder, we find that well-trained models have a high area-under-the-curve (AUC) on subjects across different subgroups pertaining to attributes such as gender, age, racial groups and different clinical studies and are unbiased under multiple fairness metrics such as demographic parity difference, equalized odds difference, equal opportunity difference, etc. We find that models that incorporate multisource data from demographic, clinical, genetic factors, and cognitive scores are also unbiased. These models have a better predictive AUC across subgroups than those trained only with imaging features, but there are also situations when these additional features do not help.
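The abstract evaluates bias with metrics such as demographic parity difference and equalized odds difference. As an illustration only (not the authors' code), the sketch below shows how these gaps are typically computed for a binary classifier across subgroups; the toy data and function names are hypothetical:

```python
import numpy as np

def demographic_parity_difference(y_pred, groups):
    """Largest gap in positive-prediction rate between any two subgroups."""
    rates = [y_pred[groups == g].mean() for g in np.unique(groups)]
    return max(rates) - min(rates)

def equalized_odds_difference(y_true, y_pred, groups):
    """Largest subgroup gap in either true-positive rate or false-positive rate."""
    gaps = []
    for label in (0, 1):  # label 0 gives the FPR gap, label 1 the TPR gap
        rates = [y_pred[(groups == g) & (y_true == label)].mean()
                 for g in np.unique(groups)]
        gaps.append(max(rates) - min(rates))
    return max(gaps)

# Hypothetical toy data: binary diagnosis, two subgroups A and B
y_true = np.array([0, 0, 1, 1, 0, 1, 1, 0])
y_pred = np.array([0, 1, 1, 1, 0, 1, 0, 0])
groups = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, groups))           # → 0.5
print(equalized_odds_difference(y_true, y_pred, groups))       # → 0.5
```

A value of 0 on either metric would indicate parity across subgroups; the paper reports that well-trained models stay close to 0 on such metrics while maintaining high AUC.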

Topics

Artificial Intelligence in Healthcare and Education · Machine Learning in Healthcare · Health, Environment, Cognitive Aging