This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Sociodemographic bias in clinical machine learning models: a scoping review of algorithmic bias instances and mechanisms
Citations: 19
Authors: 8
Year: 2024
Abstract
BACKGROUND AND OBJECTIVES: Clinical machine learning (ML) technologies can be biased, and their use could exacerbate health disparities. The extent to which bias is present, the groups that most frequently experience bias, and the mechanisms through which bias is introduced in clinical ML applications are not well described. The objective of this study was to examine instances of bias in clinical ML models. We identified the sociodemographic subgroups (defined using the PROGRESS framework) that experienced bias and the reported mechanisms of bias introduction. METHODS: We searched MEDLINE, EMBASE, PsycINFO, and Web of Science for all studies that evaluated bias on sociodemographic factors within ML algorithms created for the purpose of facilitating clinical care. The scoping review was conducted according to the Joanna Briggs Institute guide and reported using the PRISMA (Preferred Reporting Items for Systematic reviews and Meta-Analyses) extension for scoping reviews. RESULTS: We identified 6448 articles, of which 760 reported on a clinical ML model and 91 (12.0%) completed a bias evaluation and met all inclusion criteria. Most studies evaluated a single sociodemographic factor (n = 56, 61.5%). The most frequently evaluated sociodemographic factor was race (n = 59, 64.8%), followed by sex/gender (n = 41, 45.1%) and age (n = 24, 26.4%), with one study (1.1%) evaluating intersectional factors. Of all studies, 74.7% (n = 68) reported that bias was present, 18.7% (n = 17) reported that bias was not present, and 6.6% (n = 6) did not state whether bias was present. When bias was present, 87% of studies reported bias against groups with socioeconomic disadvantage. CONCLUSION: Most ML algorithms that were evaluated for bias demonstrated bias on sociodemographic factors. Furthermore, most bias evaluations concentrated on race, sex/gender, and age, while other sociodemographic factors and their intersections were infrequently assessed.
Given potential health equity implications, bias assessments should be completed for all clinical ML models.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,674 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,583 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,105 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,862 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations