This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Using Artificial Intelligence as Gatekeeper or Second Opinion: Designing Patient Pathways for Artificial Intelligence Augmented Healthcare
0 Citations · 2 Authors · 2025
Abstract
Of the 1,247 artificial intelligence (AI) systems cleared by the U.S. Food and Drug Administration as of May 2025, many function as classifiers to help screen or diagnose specific medical conditions. Yet, questions remain about how best to integrate AI into healthcare workflows, including whether AI should serve as a gatekeeper, determining which patients require human attention, or as a second opinion to complement medical consultations. Motivated by this question, we model a healthcare system in which patients can consult a specialist, an AI system, or both. The key design question is whether the patient should first consult AI or the specialist, corresponding to AI's gatekeeper and second-opinion roles, respectively. We model a two-step decision-making process influenced by an initial signal, or anchor. Contrary to popular belief, we show that using AI as a gatekeeper does not necessarily increase missed diagnoses; using AI as a second opinion, on the other hand, can increase missed diagnoses but can also increase false positives. In general, the gatekeeper approach is preferable in low-risk settings, whereas the second-opinion approach is better suited for high-risk patients for whom avoiding missed diagnoses is a primary concern. Notably, scenarios exist where AI should not be used for intermediate-risk patients for whom uncertainty is highest, challenging the premise that AI is most useful in reducing uncertainty. Finally, applying our model to glaucoma diagnosis, we numerically illustrate cost savings from optimizing patient pathways. Our work highlights the potential for AI to contribute to the United Nations' Sustainable Development Goals by optimizing resource allocation and improving patient outcomes.
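The abstract's trade-off between the two pathways can be illustrated with a toy simulation. This is a stylized sketch, not the paper's model: it ignores the anchoring mechanism, and the sensitivities, specificities, prevalence, and pathway rules (gatekeeper as a serial AI-then-specialist screen; second opinion as a specialist visit plus an AI check that can flag an additional positive) are all illustrative assumptions.

```python
import random

# Hypothetical test characteristics (illustrative assumptions, not from the paper).
AI_SENS, AI_SPEC = 0.90, 0.85    # AI classifier sensitivity / specificity
DOC_SENS, DOC_SPEC = 0.95, 0.90  # specialist sensitivity / specificity

def noisy_call(has_disease, sens, spec):
    """One noisy positive/negative call for a test with given sens/spec."""
    p_positive = sens if has_disease else 1 - spec
    return random.random() < p_positive

def gatekeeper(has_disease):
    """AI first; only AI-positive patients go on to see the specialist."""
    if not noisy_call(has_disease, AI_SENS, AI_SPEC):
        return False  # screened out by the AI gatekeeper
    return noisy_call(has_disease, DOC_SENS, DOC_SPEC)

def second_opinion(has_disease):
    """Specialist first; an AI second opinion can flag a case the specialist missed."""
    return (noisy_call(has_disease, DOC_SENS, DOC_SPEC)
            or noisy_call(has_disease, AI_SENS, AI_SPEC))

def error_rates(pathway, prevalence=0.10, n=100_000):
    """Estimate missed-diagnosis and false-positive rates for a pathway."""
    missed = false_pos = 0
    for _ in range(n):
        sick = random.random() < prevalence
        positive = pathway(sick)
        missed += sick and not positive
        false_pos += positive and not sick
    return missed / n, false_pos / n

random.seed(0)
miss_gk, fp_gk = error_rates(gatekeeper)
miss_so, fp_so = error_rates(second_opinion)
print(f"gatekeeper:     missed={miss_gk:.4f}  false positives={fp_gk:.4f}")
print(f"second opinion: missed={miss_so:.4f}  false positives={fp_so:.4f}")
```

Under these assumed parameters the simulation reproduces the abstract's qualitative pattern: the second-opinion pathway misses fewer diagnoses (making it attractive for high-risk patients) but generates more false positives than the gatekeeper pathway.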
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,303 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,155 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,555 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,453 citations