This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The role of domain expertise in trusting and following explainable AI decision support systems
83 Citations · 3 Authors · Year: 2021
Abstract
Although the roots of artificial intelligence (AI) stretch back decades, it currently flourishes in research and practice. However, AI faces trust issues. One possible solution is to make AI explain itself to its users, but it is still unclear how an AI can accomplish this in decision-making scenarios. This study focuses on how a user's expertise influences trust in explainable AI (XAI) and how this trust, in turn, influences behaviour. To test our theoretical assumptions, we develop an AI-based decision support system (DSS) and observe user behaviour in an online experiment, complemented by survey data. The results show that domain-specific expertise negatively affects trust in AI-based DSS. We conclude that the focus on explanations might be overrated for users with low domain-specific expertise, whereas it is vital for users with high expertise. By investigating the influence of expertise on explanations of an AI-based DSS, this study contributes to research on XAI and DSS.
Related works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model's Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13.098 Zit.