This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Designing AI-Based Work Processes: How the Timing of AI Advice Affects Diagnostic Decision Making
Citations: 14
Authors: 4
Year: 2025
Abstract
Although clinical artificial intelligence (AI) systems can augment medical diagnosis decisions by providing competent second opinions, how to effectively integrate AI into routine diagnostic processes, such as when to present AI advice to human physicians, remains largely unexplored. Therefore, our research experimentally examines how the timing of AI advice affects diagnostic decision making using a think-aloud approach. Physicians perform medical diagnoses under three conditions: ex post advice (AI advice given after an initial diagnosis), ex ante advice (AI advice given concurrently with clinical information), and a control condition (no AI advice). Our results indicate that the timing of AI advice significantly affects diagnostic accuracy and calibration, with the ex post advice condition yielding the best performance and the control condition the worst. We then conduct several analyses to disentangle the underlying mechanism. We reveal that the superior diagnostic quality in the ex post advice condition can be attributed to more thorough clinical information processing and more active cognitive engagement with AI’s reasoning rationale. As a result, participants in the ex post advice condition are more capable of differentiating correct from incorrect AI advice than those in the ex ante advice condition. Additionally, they benefit more from high-quality AI advice that contradicts their initial diagnoses. To gain additional insights, we estimate the heterogeneous treatment effects based on physician and clinical case characteristics. Our findings underscore the importance of presenting AI advice at appropriate times during routine diagnostic processes to achieve successful decision augmentation with AI advice. This paper was accepted by Anindya Ghose, information systems. 
Funding: This work was supported by the National University of Singapore [Grants Dean Strategic Fund - Health Informatics (HIIOT)/E and NSCP/N-171-000-499-001] and the National Natural Science Foundation of China [Grant 72301279]. Supplemental Material: The online appendix and data files are available at https://doi.org/10.1287/mnsc.2022.01454.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations