This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
How Cultural Cognition Affects Trust and Perceived Quality of AI Explanations
Citations: 0
Authors: 2
Year: 2026
Abstract
Transparent and user-centered AI systems have the potential to enhance trust and usability. This study investigates how cultural orientations influence user evaluations of AI source cues and explanation types in medical AI interfaces. Grounded in cultural cognition and dual-processing theories, we conducted a 2 × 2 × 3 mixed-model experiment with 400 participants, examining interactions between AI source cues (expert vs. peer), explanation types (global: general principles vs. local: contextual details), and cultural orientations (individualism-collectivism). Results suggested that individualism-leaning users trusted expert-appearing AI more, regardless of explanation type, whereas collectivism-leaning users reported higher trust and message quality when expert-appearing AI provided local explanations. By integrating cultural cognition into explainable AI research using a controlled factorial design, this study offers theoretically grounded and empirically robust insights for culturally adaptive medical AI systems.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,374 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,261 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,126 citations