This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Experience Over Explanation: Perceived Transparency in AI-Based Skin Cancer Detection
Citations: 0
Authors: 3
Year: 2025
Abstract
Artificial intelligence (AI) is increasingly integrated into everyday life and holds great potential for high-stakes domains such as healthcare, for example in early skin cancer detection. However, user trust remains a major barrier to adoption, and prior research has largely treated explainable AI (XAI) approaches as universally applicable rather than accounting for individual differences. In this study, we investigate how three XAI formats (mechanism–modality pairings) shape user trust through perceived transparency in AI-powered skin cancer diagnostics. Using a between-subjects online experiment with 15 dermoscopic images, we show that the effect of XAI format on trust is fully mediated by perceived transparency and that this mediation is significantly moderated by users' AI experience. Notably, AI experience can reverse the effect, underscoring the importance of tailoring explanations to user backgrounds. These findings advance the understanding of how trust in AI can be more appropriately calibrated and provide guidance for designing personalized XAI in healthcare.
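The abstract describes a first-stage moderated mediation: the XAI format affects perceived transparency (with the strength and even the sign of that path depending on AI experience), and transparency in turn carries the effect on trust. To illustrate how such a model is commonly probed, the following Python sketch fits the two regressions of a Hayes-style Model 7 on simulated data. All variable names (xai_format, ai_experience, transparency, trust), the effect sizes, and the regression setup are illustrative assumptions, not the authors' actual analysis pipeline.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical data: one row per participant in a between-subjects
# design with three XAI formats (names and effects are invented).
rng = np.random.default_rng(0)
n = 300
df = pd.DataFrame({
    "xai_format": rng.integers(0, 3, n),      # assigned condition (0, 1, 2)
    "ai_experience": rng.normal(0, 1, n),     # standardized prior AI experience
})
# Format -> transparency path is moderated by experience; the sign of
# the simulated slope (0.4 - 0.3 * experience) flips for high experience,
# mirroring the "experience can reverse the effect" pattern.
df["transparency"] = (0.4 * df["xai_format"]
                      - 0.3 * df["xai_format"] * df["ai_experience"]
                      + rng.normal(0, 1, n))
# Trust depends only on transparency here, i.e. full mediation by construction.
df["trust"] = 0.6 * df["transparency"] + rng.normal(0, 1, n)

# Mediator model: does the format -> transparency path vary with experience?
med = smf.ols("transparency ~ C(xai_format) * ai_experience", data=df).fit()

# Outcome model: with transparency included, a near-zero direct effect of
# format on trust is the pattern consistent with full mediation.
out = smf.ols("trust ~ C(xai_format) + transparency", data=df).fit()

print(med.summary())
print(out.summary())

In practice, the conditional indirect effects would additionally be tested with bootstrapped confidence intervals (e.g., percentile bootstrap over the product of the moderated a-path and the b-path); the two-regression fit above shows only the structure of the model.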
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,464 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,259 citations
"Why Should I Trust You?"
2016 · 14.315 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,138 citations