OpenAlex · Updated hourly · Last updated: 15.03.2026, 19:54

This is an overview page with metadata for this scientific work. The full article is available from the publisher.

The human factor in explainable artificial intelligence: clinician variability in trust, reliance, and performance

2025 · 4 citations · npj Digital Medicine · Open Access
Open full text at the publisher

Citations: 4
Authors: 5
Year: 2025

Abstract

Explainable Artificial Intelligence (XAI) is proposed as essential for high-risk applications like healthcare, where it aims to enhance user trust. However, studies often rely on automated metrics rather than user evaluation. We adapt a prototype-based XAI model for image-based gestational age (GA) estimation and evaluate its impact on trust, reliance, and performance, including a novel measure of appropriate reliance. Ten sonographers completed a 3-stage reader study assessing the XAI model's impact on GA estimates. Model predictions reduced clinician mean absolute error (MAE) from 23.5 to 15.7 days, and explanations yielded a further, non-significant reduction to 14.3 days. However, the impact of explanations varied across participants, with some performing worse with explanations than without. Additionally, although explanations increased participant confidence, they had no significant effect on trust or reliance on the model. These counterintuitive results highlight potential pitfalls in deploying XAI, emphasising the need for human studies to capture clinician variability.
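The abstract reports performance as mean absolute error (MAE) in days between clinician gestational-age estimates and the reference value. As a minimal sketch of that metric (the numbers below are made up for illustration, not the study's data):

```python
def mae(estimates, truths):
    """Mean absolute error between paired estimates and ground-truth values."""
    assert len(estimates) == len(truths)
    return sum(abs(e - t) for e, t in zip(estimates, truths)) / len(estimates)

# Hypothetical GA estimates vs. ground truth, in days
print(mae([200, 215, 190], [210, 210, 200]))  # ≈ 8.33 days
```

In the study, this error dropped from 23.5 days (clinicians alone) to 15.7 days (with model predictions) and 14.3 days (with explanations added).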

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Fetal and Pediatric Neurological Disorders