This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Health Equity Considerations in the Age of Artificial Intelligence
Citations: 2
Authors: 10
Year: 2025
Abstract
Artificial intelligence (AI) is rapidly reshaping neurology, offering opportunities to improve efficiency, expand access to care, and enhance clinical decision making. Yet, without deliberate safeguards, AI can perpetuate or exacerbate existing health disparities, especially for systemically marginalized populations. This article examines the dual potential of AI, as both a driver of innovation and a source of bias, in the context of neurologic care. Drawing on literature review, stakeholder consultations, and practical examples, we outline the risks of AI worsening health disparities stemming from biased data sets, nonrepresentative training populations, and opaque algorithms. We also highlight opportunities for AI to promote health equity, including early disease detection in underserved settings, language-access tools, improved clinical trial diversity, and targeted quality improvement interventions. We propose 3 guiding principles for the neurology community to ensure that AI serves as a driver of equity: (1) ensuring diverse perspectives and community engagement in AI development; (2) expanding AI education and training for neurologists; and (3) establishing ethical policy and governance mechanisms. These recommendations are intended for clinicians, researchers, educators, health system leaders, policymakers, and professional societies and provide actionable strategies to integrate equity into every stage of AI design, implementation, and oversight. By centering inclusion, transparency, and accountability, AI can be harnessed to improve equitable access to high-quality neurologic care and to address long-standing disparities in neurologic health outcomes.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,197 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,047 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,410 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations
Authors
Institutions
- Memorial Sloan Kettering Cancer Center (US)
- University of California, San Francisco (US)
- Morehouse School of Medicine (US)
- Massachusetts General Hospital (US)
- Baylor College of Medicine (US)
- University of Tennessee Health Science Center (US)
- University of Oklahoma Health Sciences Center (US)
- Ochsner Medical Center (US)
- Ochsner Health System (US)
- Mayo Clinic Hospital (US)
- University of North Carolina at Chapel Hill (US)