This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The human factor in explainable artificial intelligence: clinician variability in trust, reliance, and performance
4 citations · 5 authors · 2025
Abstract
Explainable Artificial Intelligence (XAI) is proposed as essential for high-risk applications like healthcare, where it aims to enhance user trust. However, studies often rely on automated metrics rather than user evaluation. We adapt a prototype-based XAI model for image-based gestational age (GA) estimation and evaluate its impact on trust, reliance, and performance, including a novel measure of appropriate reliance. Ten sonographers completed a 3-stage reader study assessing the XAI model's impact on GA estimates. Model predictions reduced clinician mean absolute error (MAE) from 23.5 to 15.7 days, and explanations produced a further, non-significant reduction to 14.3 days. However, the impact of explanations varied across participants, with some performing worse with explanations than without. Additionally, although explanations increased participant confidence, they had no significant effect on trust or reliance on the model. These counterintuitive results highlight potential pitfalls in deploying XAI, emphasising the need for human studies to capture clinician variability.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,305 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,204 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,103 citations