This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI in healthcare: Factors influencing medical practitioners’ trust calibration in collaborative tasks
1
Citation
3
Authors
2024
Year
Abstract
Artificial intelligence is transforming clinical decision-making processes by using patient data for improved diagnosis and treatment. However, the increasingly black-box nature of AI systems presents comprehension challenges for users. To ensure the safe and efficient utilization of these systems, it is essential to establish appropriate levels of trust. Accordingly, this study aims to answer the following research question: What factors influence medical practitioners' trust calibration in their interactions with AI-based clinical decision support systems (CDSSs)? Applying an exploratory approach, the data is collected through semi-structured interviews with medical and AI experts, and is examined through qualitative content analysis. The results indicate that perceived understandability, technical competence, and reliability of the system, along with other user- and context-related factors, impact physicians' trust calibration in AI-based CDSSs. As there is limited literature on this specific topic, our findings provide a foundation for future studies aiming to delve deeper into this field.
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,400 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,261 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,695 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,506 citations