This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Conceptualizing Clinicians’ Trust in Artificial Intelligence as a Function of Their Expertise, Workload, Patient Outcome, Diagnosis Difficulty, and AI Accuracy: A Systems Thinking Approach
Citations: 2
Authors: 4
Year: 2025
Abstract
This study applies a systems thinking approach to examine the complex dynamics influencing clinicians’ trust in artificial intelligence (AI). We propose a conceptual model that maps how trust in AI evolves in response to key factors such as workload, diagnostic difficulty, AI accuracy, patient outcomes, prior trust in AI, and user expertise. The interrelationships among these variables were synthesized from existing literature to explore hypothetical scenarios, rather than to predict specific outcomes. The data used in this study are assumed to be tabular in nature, with analytical challenges arising from inherent uncertainties affecting both humans and AI. Using simulation-based analysis, we observed that clinicians’ trust in AI decreases as diagnostic difficulty—both for the AI system and the human clinician—and clinical workload increase. Interestingly, trust remains relatively stable under rising workload when diagnostic tasks are simple but significantly declines when tasks are complex. Clinician expertise emerged as a critical moderating factor. Experienced clinicians tend to maintain higher trust levels in AI systems, even under challenging conditions, compared to their less experienced counterparts. This may be attributed to their enhanced ability to detect AI errors, allowing for more calibrated and resilient trust. These insights highlight the importance of designing AI systems and training interventions that support appropriate trust calibration, especially in high-pressure clinical environments. Our findings suggest that a one-size-fits-all approach to fostering trust in AI may be insufficient. Instead, targeted strategies that account for diagnostic complexity and clinician expertise are essential. This work contributes to the broader understanding of trust dynamics in clinical AI adoption and offers practical guidance for developers and healthcare organizations seeking to optimize human-AI collaboration.
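The abstract describes the model only verbally. As a minimal, hypothetical sketch of how such a trust-update loop could be simulated, the Python snippet below tracks a single trust state in [0, 1] that is nudged each case by AI accuracy, diagnostic difficulty, workload, patient outcome, and clinician expertise. The variable names, weights, and update rule are illustrative assumptions, not the authors' published model.

# Hypothetical sketch of a trust-update loop in the spirit of the paper's
# conceptual model. Weights, ranges, and the update rule are illustrative
# assumptions, not the authors' published model.

def update_trust(trust, ai_correct, difficulty, workload, good_outcome,
                 expertise, learn_rate=0.1):
    """Return the next trust level, clamped to [0, 1].

    trust, difficulty, workload, expertise are in [0, 1];
    ai_correct and good_outcome are booleans for the current case.
    """
    # Evidence about the AI this step: a correct suggestion and a good
    # patient outcome raise trust; an error or a bad outcome lowers it.
    evidence = (1.0 if ai_correct else -1.0) + (0.5 if good_outcome else -0.5)

    if evidence < 0:
        # Negative evidence hits harder when the case is difficult and the
        # clinician is busy (trust erodes under pressure), but expertise
        # buffers the drop: experts spot AI errors and stay calibrated.
        pressure = 1.0 + difficulty + workload
        step = learn_rate * evidence * pressure * (1.0 - 0.7 * expertise)
    else:
        step = learn_rate * evidence

    return min(1.0, max(0.0, trust + step))


# Example: a novice vs. an expert facing the same run of difficult, busy
# cases in which the AI is wrong half the time.
cases = [(False, 0.9, 0.8, False), (True, 0.9, 0.8, True)] * 10
for label, expertise in (("novice", 0.1), ("expert", 0.9)):
    trust = 0.6
    for ai_correct, difficulty, workload, good_outcome in cases:
        trust = update_trust(trust, ai_correct, difficulty, workload,
                             good_outcome, expertise)
    print(f"{label}: final trust = {trust:.2f}")

Under these assumed weights, the toy run reproduces the qualitative pattern reported in the abstract: the novice's trust collapses under hard, high-workload cases, while the expert's trust stays close to its starting level.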
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,214 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,071 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,429 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,418 citations