This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explained or Certified? Examining the Influence of XAI and AI-Seals on Users’ Trust and Understanding
Citations: 0
Authors: 4
Year: 2025
Abstract
Explainable artificial intelligence (XAI) aims to increase users' understanding of and trust in AI-powered systems. However, while XAI has shown benefits, it also comes with shortcomings, such as shifting the cognitive burden to users. Certifications, so-called AI-Seals, offer an alternative (or addition) to XAI by signaling a system's trustworthiness through a label. Despite their potential, empirical investigations of AI-Seals remain scarce. In an online experiment (N = 436), a 2 (XAI vs. no XAI) x 2 (AI-Seal vs. no AI-Seal) between-subjects design was used to analyze the effects of XAI, AI-Seals, or a combination of both on users' perceived and factual understanding of, and trust in, the AI application. Results show that XAI led to more perceived and factual understanding but had no effect on trust. Contrary to our assumptions, an AI-Seal promoted neither perceived understanding nor trust. Likewise, the combination did not increase trust but showed potential to increase factual understanding.
Related Works
The global landscape of AI ethics guidelines
2019 · 4,495 citations
The Limitations of Deep Learning in Adversarial Settings
2016 · 3,853 citations
Trust in Automation: Designing for Appropriate Reliance
2004 · 3,372 citations
Fairness through awareness
2012 · 3,265 citations
Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer
1987 · 3,182 citations