This is an overview page with metadata for this scientific work. An external link to the full text is currently not available.
Artificial Intelligence, Health Equity, and Social Determinants of Health: Designing AI to Reduce (Not Reinforce) Healthcare Disparities
Citations: 0
Authors: 1
Year: 2024
Abstract
Artificial intelligence (AI) is increasingly embedded in clinical decision support, diagnostic workflows, and population health management, offering the potential to improve care delivery and expand access at scale. However, when developed without explicit attention to health equity, AI systems risk reproducing or amplifying disparities driven by social determinants of health (SDOH), structural inequities, and unequal access to care. Empirical evidence demonstrates that widely deployed clinical algorithms can systematically disadvantage historically marginalized populations when biased proxies, inequitable labels, or incomplete data are used to estimate health need or risk. This paper examines how inequities arise across the medical AI lifecycle, including problem formulation, data collection and labeling, model development, evaluation, deployment, and post-deployment use. We synthesize evidence from clinical, ethical, and machine learning literature to show that disparities often emerge not from algorithmic intent, but from misaligned objectives, inequitable data-generating processes, and reliance on aggregate performance metrics that obscure subgroup harm. We argue that conventional accuracy-based evaluation is insufficient for assessing the safety and equity of AI in real-world clinical settings. To address these challenges, we propose an equity-centered framework for medical AI that integrates SDOH into system design and governance while maintaining clinical validity. The framework emphasizes equity-aware problem framing, responsible incorporation of SDOH variables, disaggregated and uncertainty-aware evaluation, transparent documentation of intended use and limitations, and continuous monitoring of subgroup-specific outcomes after deployment. 
By positioning equity as a core design and evaluation objective rather than a post-hoc consideration, this work provides practical guidance for developing AI systems that reduce, rather than reinforce, systemic healthcare disparities.
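The disaggregated, uncertainty-aware evaluation the abstract calls for can be illustrated with a small sketch. This is not code from the paper: the function, data, and group labels below are synthetic and purely illustrative. The idea is to report per-subgroup accuracy with a bootstrap confidence interval, so that an acceptable aggregate score cannot mask poor performance on a specific group.

```python
# Illustrative sketch (not from the paper): disaggregated evaluation of a
# binary risk model, reporting accuracy per subgroup with a bootstrap
# confidence interval instead of a single aggregate score.
import random
from collections import defaultdict

def disaggregated_accuracy(y_true, y_pred, groups, n_boot=1000, seed=0):
    """Return {group: (accuracy, (ci_low, ci_high))} via a simple bootstrap."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for t, p, g in zip(y_true, y_pred, groups):
        by_group[g].append(t == p)          # per-example correctness flag
    report = {}
    for g, hits in by_group.items():
        acc = sum(hits) / len(hits)
        # Resample correctness flags to estimate a 95% interval for accuracy.
        boots = sorted(
            sum(rng.choice(hits) for _ in hits) / len(hits)
            for _ in range(n_boot)
        )
        lo, hi = boots[int(0.025 * n_boot)], boots[int(0.975 * n_boot) - 1]
        report[g] = (acc, (lo, hi))
    return report

# Synthetic example where the aggregate accuracy (0.625) hides a subgroup gap:
# the model is perfect on group A but only 25% accurate on group B.
y_true = [1, 0, 1, 1, 0, 0, 1, 0] * 25
y_pred = [1, 0, 1, 1, 0, 1, 0, 1] * 25
groups = (["A"] * 4 + ["B"] * 4) * 25

report = disaggregated_accuracy(y_true, y_pred, groups)
for g, (acc, (lo, hi)) in sorted(report.items()):
    print(f"group {g}: accuracy={acc:.2f}, 95% CI=({lo:.2f}, {hi:.2f})")
```

In this toy setup, the overall accuracy of 0.625 looks mediocre but survivable, while the disaggregated report shows group A at 1.00 and group B at 0.25, which is the kind of subgroup harm that aggregate metrics obscure.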
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,521 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,412 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,891 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,575 citations