This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Before You Build: Seven Misconceptions Health AI Developers Must Confront in Global Public Health
Citations: 0 · Authors: 5 · Year: 2026
Abstract
Every week, engineers, data scientists, and entrepreneurs approach global health organizations with a similar aspiration: to use artificial intelligence (AI) to improve population health. They arrive with genuine passion and a set of assumptions about healthcare and health systems that are, with striking regularity, misaligned with global realities. This essay introduces ‘The Context Gap in Global Health AI,’ a framework that presents seven foundational misconceptions as a structured analysis, drawing on evidence from implementation science, participatory evaluation, global health systems research, and the author’s experience leading AI datathons across more than 40 countries. These misconceptions include conflating benchmark performance with population health impact; privileging elite, English-language clinical knowledge; overstating the influence of clinical AI relative to social and structural determinants of health; treating diagnostic delay as a standalone problem rather than a symptom of health system failure; approaching co-design as a late-stage validation step rather than a foundation for development; overlooking data provenance and colonial data flows; and assuming that venture capital timelines are compatible with long-term global health impact and transformation. Together, these misconceptions illustrate how AI systems trained, evaluated, and financed in high-resource settings are often assumed to generalize to fundamentally different health systems. Throughout the essay, we center the realities of low- and middle-income country health systems and the communities they serve. We conclude with a call for community-partnered development, context-aware evaluation, provenance-centered data governance, and funding models aligned with the long time horizons required for equitable global health innovation.