This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Human-AI Interaction in Low- and Middle-Income Countries: How Local Human Factors Influence AI Development and Deployment (Preprint)
0
Citations
6
Authors
2025
Year
Abstract
Artificial intelligence (AI) is rapidly transforming healthcare and health research, offering new opportunities for improving efficiency, accessibility, and equity. However, the ethical, societal, and regulatory challenges of AI development and deployment are particularly pronounced in low- and middle-income countries (LMICs). While existing literature often emphasises high-level ethical principles or technical frameworks, there is a notable gap in empirical, qualitative research that centers on human involvement and sociocultural dynamics throughout the AI lifecycle in LMIC contexts. This study addresses this gap by exploring the role of human involvement across the AI lifecycle, examining how cultural, societal, and governance factors influence AI perceptions and expectations in LMICs. Through 21 qualitative interviews with AI researchers and innovators across MENA, Africa, Latin America, and Asia, we identified five key themes: (1) the necessity of human oversight and the readiness required to support it, (2) the need for AI ethics training, (3) the importance of developing AI systems tailored to local realities, (4) the role of human-centered AI governance, and (5) the value of securing multidisciplinary teams. Findings highlight critical gaps in AI literacy, ethical governance, and interdisciplinary collaboration, emphasising that AI solutions must be co-designed with local communities to be culturally and contextually relevant. This study underscores the urgent need for participatory AI development in LMICs and calls for investment in AI education, ethical oversight, and inclusive governance frameworks to ensure that AI serves as a tool for social equity rather than exclusion.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations