This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Fairness Artificial Intelligence in Clinical Decision Support: Mitigating Effect of Health Disparity
4 citations · 5 authors · 2024
Abstract
The United States, as well as the global community, experiences health disparities among socially disadvantaged populations. These disparities often manifest in the data used for AI model training. Without appropriate de-biasing strategies, models trained to optimize predictive performance may inadvertently capture and perpetuate these inherent biases. Using biased models in clinical decision-making can harm patients from disadvantaged groups and exacerbate disparities when those decisions are documented and employed to train subsequent AI models. Unlike conventional correlation-based methods, we aim to mitigate the negative impacts of health disparity by answering a causal inference question for fairness: would the clinical decision support system make a different decision if the patient had a different sensitive attribute (e.g., race)? Recognizing the high computational complexity of developing causal models, we propose a flexible and efficient causal-model-free algorithm, CFReg, which provides causal fairness for supervised machine learning models. In addition, CFReg introduces a novel evaluation metric to quantify fairness within clinical settings.
We first validate CFReg using a healthcare dataset of 48,784 patients focused on care management, then generalize it to four additional benchmark datasets exhibiting racial and ethnic disparity, covering law school admission, adult income, criminal recidivism, and violent crime prediction. Experimental results demonstrate that CFReg outperforms baseline approaches in both fairness and accuracy, achieving a good trade-off between model fairness and supervised classification performance.
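The causal question posed above can be approximated empirically with a simple attribute-flip probe: toggle a patient's sensitive attribute in the model input and check whether the prediction changes. The sketch below is purely illustrative and is not the paper's CFReg algorithm; the synthetic data, feature names, and the flip-rate metric are assumptions. A true counterfactual-fairness evaluation would also propagate the attribute change through downstream features via a causal model.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 2000

# Hypothetical synthetic cohort: a binary sensitive attribute A and
# two clinical features that are correlated with A.
A = rng.integers(0, 2, n)
X = rng.normal(size=(n, 2)) + 0.8 * A[:, None]
y = (X[:, 0] + 0.5 * rng.normal(size=n) > 0.4).astype(int)

feats = np.column_stack([X, A])
model = LogisticRegression().fit(feats, y)

# Attribute-flip probe: toggle A while holding the other features
# fixed, and measure how often the predicted decision changes.
flipped = np.column_stack([X, 1 - A])
flip_rate = np.mean(model.predict(feats) != model.predict(flipped))
print(f"fraction of decisions that change when A is flipped: {flip_rate:.3f}")
```

A flip rate near zero suggests the classifier's decisions are insensitive to the sensitive attribute itself, although, as the abstract notes, correlated proxy features can still transmit bias, which is why a causal (rather than correlation-based) treatment is needed.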
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,231 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,084 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,444 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,423 citations