This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Subpopulation-specific Machine Learning Prognosis for Underrepresented Patients with Double Prioritized Bias Correction
Citations: 5
Authors: 5
Year: 2021
Abstract
Background
Many clinical datasets are intrinsically imbalanced, dominated by overwhelming majority groups. Off-the-shelf machine learning models that optimize the prognosis of majority patient types (e.g., the healthy class) may cause substantial errors on the minority prediction class (e.g., the disease class) and demographic subgroups (e.g., Black or young patients). In the typical one-machine-learning-model-fits-all paradigm, racial and age disparities are likely to exist but go unreported. In addition, some widely used whole-population metrics give misleading results.

Methods
We design a double prioritized (DP) bias correction technique to mitigate representational biases in machine-learning-based prognosis. Our method trains customized machine learning models for specific ethnicity or age groups, a substantial departure from the one-model-predicts-all convention. We compare it with other sampling and reweighting techniques on mortality and cancer survivability prediction tasks.

Results
We first provide empirical evidence of various prediction deficiencies in a typical machine learning setting without bias correction. For example, missed death cases are 3.14 times higher than missed survival cases in mortality prediction. We then show that DP consistently boosts minority class recall for underrepresented groups, by up to 38.0%. DP also reduces relative disparities across race and age groups, e.g., performing up to 88.0% better than the 8 existing sampling solutions in terms of the relative disparity of minority class recall. Cross-race and cross-age-group evaluation also suggests the need for subpopulation-specific machine learning models.

Conclusions
Biases exist in the widely accepted one-machine-learning-model-fits-all-population approach. We present a bias correction method that produces specialized machine learning prognostication models for underrepresented racial and age groups. This technique may reduce life-threatening prediction mistakes for minority populations.

Plain Language Summary
This work aims to improve the prediction accuracy of machine learning models in medical applications, e.g., estimating the likelihood of a patient dying during an emergency room visit or surviving cancer. Inaccurate predictions may have life-threatening consequences. We first examine how biases in training data impact prediction outcomes, in particular how underrepresented patients (e.g., young patients or patients of color) are affected. Then, we design a double prioritized (DP) bias correction technique. It allows one to train machine learning models for specific demographic groups, e.g., one model for Black patients and another for Asian patients. Our results confirm the need for training subpopulation-specific machine learning models. Our work helps improve the medical care of minority patients in the age of digital health.
Related Works
"Why Should I Trust You?"
2016 · 14,261 citations
A Comprehensive Survey on Graph Neural Networks
2020 · 8,629 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Artificial intelligence in healthcare: past, present and future
2017 · 4,396 citations