OpenAlex · Updated hourly · Last updated: 13.03.2026, 06:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Investigating the Role of AI Explanations in Lay Individuals’ Comprehension of Radiology Reports: A Metacognition Lens

2025 · 0 Citations · Open Access

Citations: 0

Authors: 3

Year: 2025

Abstract

Much research has focused on advancing techniques for explainable artificial intelligence (XAI) to improve the utility of AI recommendations. However, the metacognitive processes involved in interacting with AI explanations have not been fully explored. In this study, we examine the effects of AI explanations on human decisions from the perspective of the cognitive mechanisms that evaluate the correctness of AI recommendations. To accomplish this, we conducted a large-scale, between-subjects experiment (N=4,302) on Amazon Mechanical Turk, in which each participant was asked to classify a radiology report as describing a normal or abnormal finding. Participants were randomly assigned to three groups: a) without accompanying AI input (control group), b) with AI prediction only, and c) with AI prediction and AI explanation. Our results show that AI explanations improved overall task performance. We hypothesize that explanations help decision-makers better evaluate their intuitions about their decisions—a process known as self-monitoring—and, as such, overcome their cognitive limitations and compensate for machine prediction errors. Additionally, our results show that explanations are more effective when AI prediction confidence is high or users' self-confidence is low. We conclude this paper by discussing the theoretical and practical implications of our findings.

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education