OpenAlex · Updated hourly · Last updated: 07.04.2026, 15:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Explainable AI in healthcare: Factors influencing medical practitioners’ trust calibration in collaborative tasks

2024 · 1 citation · Proceedings of the Annual Hawaii International Conference on System Sciences · Open Access
Open full text at publisher

Citations: 1
Authors: 3
Year: 2024

Abstract

Artificial intelligence is transforming clinical decision-making processes by using patient data for improved diagnosis and treatment. However, the increasing black-box nature of AI systems presents comprehension challenges for users. To ensure the safe and efficient utilization of these systems, it is essential to establish appropriate levels of trust. Accordingly, this study aims to answer the following research question: What factors influence medical practitioners' trust calibration in their interactions with AI-based clinical decision support systems (CDSSs)? Applying an exploratory approach, the data is collected through semi-structured interviews with medical and AI experts, and is examined through qualitative content analysis. The results indicate that perceived understandability, technical competence, and reliability of the system, along with other user- and context-related factors, impact physicians' trust calibration in AI-based CDSSs. As there is limited literature on this specific topic, our findings provide a foundation for future studies aiming to delve deeper into this field.

Similar works

Authors

Topics

Artificial Intelligence in Healthcare and Education