This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Systematic Inequities in Australian Health Workforce, Digital Access, and Service Utilisation: Implications for Artificial Intelligence Deployment in Public Health
Citations: 0
Authors: 1
Year: 2026
Abstract
Artificial intelligence (AI) is often proposed as a solution to rural and remote health workforce shortages. However, when AI systems are trained on historical service utilisation data shaped by cost, cultural, and logistical barriers, they risk misinterpreting constrained access as low demand and perpetuating inequities rather than reducing them. This research monograph quantifies inequities in (i) health workforce distribution, (ii) digital infrastructure, and (iii) health service utilisation for Aboriginal and Torres Strait Islander peoples and remote communities, using Australian government datasets spanning 2013–2025. Findings identify a 72% specialist workforce deficit in remote areas, a 21‑point digital inclusion gap in Very Remote communities, and significant access barriers including cost (41.6%), cultural safety concerns (25.6%), and logistical constraints (24.9%). Combined through a multiplicative model, these inequities reduce effective access for Indigenous Very Remote populations to approximately 12% of the urban non‑Indigenous baseline. The analysis demonstrates how AI systems can encode “prophetic surveillance”—forecasting the continuation of structural barriers—and reveals how representation bias, proxy variables, and infrastructure mismatches produce high‑risk deployment environments. This monograph provides an empirical foundation for AI governance in Australian health systems, offering a structured framework for evaluating equity risks, preventing algorithmic harm, and prioritising community‑led models and structural reforms over technologically driven substitutions.
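The abstract's multiplicative model, which combines inequities across workforce, digital infrastructure, and service utilisation into a single effective-access figure, can be sketched as follows. Only the 72% workforce deficit (a ratio of 0.28) and the ~12% combined result are stated above; the `digital` and `utilisation` ratios below are illustrative assumptions chosen to reproduce that combined figure, not values taken from the monograph.

```python
from math import prod

def effective_access(factors):
    """Combine per-dimension access ratios (each in 0..1) multiplicatively.

    Each barrier scales the access remaining after the previous ones,
    so deficits compound rather than add.
    """
    return prod(factors)

# Per-dimension ratios relative to an urban non-Indigenous baseline:
workforce = 0.28    # 72% specialist workforce deficit (stated in the abstract)
digital = 0.65      # hypothetical ratio for the digital inclusion gap
utilisation = 0.66  # hypothetical ratio for cost/cultural/logistical barriers

combined = effective_access([workforce, digital, utilisation])
print(round(combined, 2))  # → 0.12, i.e. ~12% of the baseline
```

The multiplicative form is what makes the compounding visible: three individually moderate deficits yield a combined figure far below any single one of them.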
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,508 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,393 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,864 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,564 citations