This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Evaluating the Impact of Artificial Intelligence Technologies on Operational Efficiency in Uganda’s Health Sector: A Multi-Level Institutional Analysis
Citations: 0
Authors: 1
Year: 2026
Abstract
Background
Artificial Intelligence (AI) is increasingly promoted as a transformative solution to healthcare inefficiencies, particularly in low- and middle-income countries (LMICs). Yet most AI health integration models are derived from high-income contexts, leaving critical gaps in understanding how these technologies perform within structurally constrained systems such as Uganda’s. This study addresses that gap by investigating how AI influences operational efficiency across Uganda’s health sector using a multi-level institutional lens.

Objectives
The study aimed to: (1) assess the status and extent of AI adoption across Uganda’s public and private health sectors; (2) evaluate how AI influences operational efficiency metrics such as turnaround time, workload, and data management; (3) identify key enablers and barriers to AI integration at the policy, organisational, and frontline levels; and (4) propose a context-sensitive framework to guide responsible AI implementation.

Methods
Guided by a Critical Realist paradigm and socio-technical systems (STS) theory, this qualitative study employed a multi-level institutional analysis across the Macro (policy), Meso (organisational), and Micro (frontline) levels. Data were collected from 263 participants through 28 in-depth interviews and 45 focus group discussions (FGDs) conducted across 19 purposively selected healthcare institutions. Supplementary document analysis was conducted on policy texts and strategic plans. Thematic analysis with cross-level triangulation was used to map patterns of adoption, resistance, and adaptation.

Results
At the Macro level, although AI is referenced in Uganda’s National Digital Health Strategy (2020-2025), regulatory frameworks remain vague, with fragmented governance and limited accountability for algorithmic risk or consent management. At the Meso level, institutional readiness varied significantly. Facilities with committed leadership, stable infrastructure, and interoperable systems reported improved diagnostic turnaround and workflow efficiency. Others, constrained by poor connectivity, siloed operations, and untrained staff, exhibited tool underutilisation or abandonment. At the Micro level, frontline clinicians expressed cautious openness to AI tools. Adoption was highest when systems were mobile-compatible, aligned with existing routines, and supported by iterative, inclusive training. However, trust was undermined by “black box” opacity, ethical ambiguities, and digital literacy disparities, especially among female healthcare workers in rural areas.

Discussion
The study demonstrates that AI functions less as a stand-alone innovation and more as an amplifier of existing institutional strengths and weaknesses. Where leadership, digital infrastructure, and epistemic alignment were strong, AI bolstered efficiency; where these were lacking, AI reproduced fragmentation and inequity. Importantly, the study reveals institutional misalignments: policy enthusiasm unmatched by operational capacity, and regulatory aspirations not reflected in clinical workflows. These findings challenge dominant techno-optimist narratives and highlight the need for co-designed, contextually validated AI solutions in LMICs.

Conclusion
AI can support operational efficiency in Uganda’s healthcare sector, but only when embedded within ethically responsive, context-aware, and systemically aligned institutional frameworks. To move from isolated pilots to scalable, equitable impact, Uganda must establish enforceable governance, foster organisational preparedness, and prioritise human-centred design. The study proposes a multi-level roadmap for responsible AI integration, grounded in empirical evidence and theoretical insight, and contributes to the global discourse on equitable digital health transformation in LMICs.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations