OpenAlex · Updated hourly · Last updated: 13.03.2026, 15:37

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Conceptualizing Clinicians’ Trust in Artificial Intelligence as a Function of Their Expertise, Workload, Patient Outcome, Diagnosis Difficulty, and AI Accuracy: A Systems Thinking Approach

2025 · 2 citations · IEEE Access · Open Access
Open full text at publisher

2 citations · 4 authors · 2025

Abstract

This study applies a systems thinking approach to examine the complex dynamics influencing clinicians’ trust in artificial intelligence (AI). We propose a conceptual model that maps how trust in AI evolves in response to key factors such as workload, diagnostic difficulty, AI accuracy, patient outcomes, prior trust in AI, and user expertise. The interrelationships among these variables were synthesized from existing literature to explore hypothetical scenarios, rather than to predict specific outcomes. The data used in this study are assumed to be tabular in nature, with analytical challenges arising from inherent uncertainties affecting both humans and AI. Using simulation-based analysis, we observed that clinicians’ trust in AI decreases as diagnostic difficulty—both for the AI system and the human clinician—and clinical workload increase. Interestingly, trust remains relatively stable under rising workload when diagnostic tasks are simple but significantly declines when tasks are complex. Clinician expertise emerged as a critical moderating factor. Experienced clinicians tend to maintain higher trust levels in AI systems, even under challenging conditions, compared to their less experienced counterparts. This may be attributed to their enhanced ability to detect AI errors, allowing for more calibrated and resilient trust. These insights highlight the importance of designing AI systems and training interventions that support appropriate trust calibration, especially in high-pressure clinical environments. Our findings suggest that a one-size-fits-all approach to fostering trust in AI may be insufficient. Instead, targeted strategies that account for diagnostic complexity and clinician expertise are essential. This work contributes to the broader understanding of trust dynamics in clinical AI adoption and offers practical guidance for developers and healthcare organizations seeking to optimize human-AI collaboration.
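The abstract reports a simulation-based analysis in which trust rises or falls case by case with AI accuracy, workload, diagnostic difficulty, and clinician expertise. The paper's actual model equations are not given on this page, so the following is a purely illustrative toy sketch: the update rule, the `simulate_trust` function, and every coefficient in it are assumptions made for this example, not the authors' model. It only mirrors the qualitative findings stated above (trust declines under complex, high-workload conditions; expertise moderates the decline).

```python
# Toy illustration of trust-in-AI dynamics (assumed rules, NOT the paper's model).
# Trust rises after correct AI advice and falls after errors; workload and
# diagnostic difficulty dampen the gains, and higher clinician expertise
# softens the penalty for AI errors (experts recalibrate rather than reject).
import random

def simulate_trust(n_cases, ai_accuracy, workload, difficulty, expertise,
                   initial_trust=0.5, seed=0):
    """Return the trust trajectory over n_cases; all inputs lie in [0, 1]."""
    rng = random.Random(seed)
    trust = initial_trust
    trajectory = [trust]
    for _ in range(n_cases):
        ai_correct = rng.random() < ai_accuracy
        # Heavier workload and harder cases slow trust gains.
        gain = 0.05 * (1 - workload) * (1 - difficulty)
        # Experts lose less trust per detected error than novices.
        loss = 0.15 * (1 - 0.5 * expertise)
        trust += gain if ai_correct else -loss
        trust = min(1.0, max(0.0, trust))  # keep trust in [0, 1]
        trajectory.append(trust)
    return trajectory

# Expert vs. novice under identical high-workload, high-difficulty conditions.
expert = simulate_trust(200, ai_accuracy=0.8, workload=0.8, difficulty=0.8, expertise=0.9)
novice = simulate_trust(200, ai_accuracy=0.8, workload=0.8, difficulty=0.8, expertise=0.1)
```

With a shared random seed the expert's trajectory never drops below the novice's, which is the moderating effect the abstract describes; changing the coefficients changes the magnitude but not that ordering.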

Topics

Artificial Intelligence in Healthcare and Education · Human-Automation Interaction and Safety · Ethics and Social Impacts of AI