This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Investigating the role of AI explanations in lay individuals’ comprehension of radiology reports: A metacognition lens
Citations: 0
Authors: 3
Year: 2025
Abstract
While there has been extensive research on techniques for explainable artificial intelligence (XAI) to enhance AI recommendations, the metacognitive processes involved in interacting with AI explanations remain underexplored. This study examines how AI explanations impact human decision-making by leveraging cognitive mechanisms that evaluate the accuracy of AI recommendations. We conducted a large-scale experiment (N = 4,302) on Amazon Mechanical Turk (AMT), in which participants classified radiology reports as normal or abnormal. Participants were randomly assigned to one of three groups: a) no AI input (control group), b) AI prediction only, or c) AI prediction with explanation. Our results indicate that AI explanations enhanced task performance, and that explanations are more effective when AI prediction confidence is high or users' self-confidence is low. We conclude by discussing the implications of our findings.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?"
2016 · 14.156 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations