This is an overview page with metadata for this scientific work. The full article is available from the publisher.
The authorization lottery: contradictory AI prioritization patterns in healthcare resource allocation
Citations: 1
Authors: 5
Year: 2025
Abstract
Healthcare systems increasingly deploy artificial intelligence to allocate resources, including procedure authorizations that impact patient access to care. While concerns about algorithmic bias typically focus on representation of protected attributes, how AI systems approach resource-constrained decisions remains understudied. We evaluated three large language models (LLMs), ChatGPT, Claude, and DeepSeek, on their handling of simulated surgical authorization requests for an identical procedure. Each model assessed 6,500 surgeon profiles while implementing a mandated 30% denial rate, mirroring real-world authorization constraints. Multivariate regression analysis quantified how each model weighted 13 standardized attributes, including professional qualifications and demographic characteristics. ChatGPT assigned significantly lower authorization scores to female surgeons (-9.55 points; 95% CI: -9.98, -9.11), while Claude (+2.01 points; 95% CI: +1.85, +2.17) and DeepSeek (+4.03 points; 95% CI: +3.91, +4.15) assigned higher scores to female surgeons. Geographic biases also emerged, with ChatGPT heavily favoring North American surgeons (+18.83 points; 95% CI: +18.00, +19.65) and DeepSeek penalizing them (-3.95 points; 95% CI: -4.18, -3.72). In ChatGPT, demographic factors frequently outweighed clinical qualifications; geographic location affected authorization scores more than board certification. Although all models showed high internal consistency (R² values 0.822–0.929), variability in how they prioritized attributes resulted in divergent approval thresholds despite identical denial rates (ChatGPT: 64.6 ± 21.1, Claude: 68.5 ± 9.1, DeepSeek: 89.4 ± 9.2). We describe a phenomenon in AI healthcare decision-making that we term "constrained-resource divergence": when forced to discriminate between identical cases under resource constraints, AI systems may apply arbitrary weights that can impact patient care without clinical justification. In practice, this means patients with identical presentations may receive different authorization decisions depending on which AI model their insurer deployed. Our findings raise profound questions about AI reliability for consequential healthcare decisions.
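The abstract's analytic setup can be illustrated with a minimal sketch (this is not the authors' released code): regress model-assigned authorization scores on standardized surgeon attributes to estimate per-attribute weights with 95% confidence intervals, then derive the approval threshold implied by a mandated 30% denial rate. All column names, the attribute subset, and the placeholder data below are hypothetical.

```python
# Sketch of the two steps described in the abstract, on hypothetical data:
# (1) multivariate OLS of authorization scores on surgeon attributes,
# (2) approval threshold under a fixed 30% denial rate.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 6500  # number of simulated surgeon profiles

# Hypothetical attribute table; the study used 13 standardized attributes.
profiles = pd.DataFrame({
    "female": rng.integers(0, 2, n),
    "north_american": rng.integers(0, 2, n),
    "board_certified": rng.integers(0, 2, n),
    "years_experience_z": rng.standard_normal(n),
})
scores = rng.uniform(0, 100, n)  # placeholder for LLM-assigned scores

# How much each attribute shifts the authorization score, with 95% CIs.
X = sm.add_constant(profiles)
fit = sm.OLS(scores, X).fit()
print(fit.params)       # per-attribute point estimates
print(fit.conf_int())   # 95% confidence intervals
print(f"R^2 = {fit.rsquared:.3f}")

# A mandated 30% denial rate implies the approval cutoff is the
# 30th percentile of the score distribution.
threshold = np.quantile(scores, 0.30)
approved = scores >= threshold
print(f"threshold = {threshold:.1f}, denial rate = {1 - approved.mean():.2%}")
```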
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations