OpenAlex · Updated hourly · Last updated: 14.03.2026, 14:33

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

The role of domain expertise in trusting and following explainable AI decision support systems

2021 · 83 citations · Journal of Decision Systems · Open Access
Open full text at the publisher

Citations: 83

Authors: 3

Year: 2021

Abstract

Although the roots of artificial intelligence (AI) stretch back some years, it currently flourishes in research and practice. However, AI faces trust issues. One possible solution approach is making AI explain itself to its user, but it is still unclear how an AI can accomplish this in decision-making scenarios. This study focuses on how a user’s expertise influences trust in explainable AI (XAI) and how this influences behaviour. To test our theoretical assumptions, we develop an AI-based decision support system (DSS) and observe user behaviour in an online experiment, complemented with survey data. The results show that domain-specific expertise negatively affects trust in AI-based DSS. We conclude that the focus on explanations might be overrated for users with low domain-specific expertise, whereas it is vital for users with high expertise. By investigating the influence of expertise on explanations of an AI-based DSS, this study contributes to research on XAI and DSS.

Topics

Explainable Artificial Intelligence (XAI) · Ethics and Social Impacts of AI · Artificial Intelligence in Healthcare and Education