OpenAlex · Updated hourly · Last updated: 27.03.2026, 00:36

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Remember Non-Specialists? How effective are XAI explanations in helping non-specialists understand an AI model’s decision?

2025 · 0 citations · 3 authors
Open full text at the publisher

Abstract

Explainable Artificial Intelligence (XAI) can uncover the inner workings of black-box models, enhancing transparency and building trust in AI-driven decision-making. However, there is ongoing debate regarding the effectiveness of XAI explanations: specifically, whether they are understandable to users who lack technical knowledge or have low digital literacy, and whether users have the confidence to question an automated decision based on an AI model's outcome. To address these challenges, we propose adapting metrics from cognitive psychology's Mental Model approach to assess non-specialist (non-technical) participants' understanding of two different types of XAI explanation (SHAP and example-based). Using a healthcare scenario, we train a random forest model to classify a cancer diagnosis and create a series of explanation types. This paper presents a study evaluating the effectiveness of these explanation types with non-specialist users, using metrics including understanding, trust, and perceived usefulness. The results show that non-specialist users who had received one training session in SHAP trusted the SHAP explanation more than the example-based explanation; however, 81% of participants considered example-based explanations more useful.
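The setup described in the abstract can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: scikit-learn's built-in breast cancer dataset stands in for the paper's healthcare data, and since proper SHAP values require the `shap` package, the feature-attribution part uses the forest's impurity-based importances as a crude stand-in. The example-based explanation is approximated as the nearest training case sharing the predicted label.

```python
# Hypothetical sketch (assumptions: scikit-learn's breast cancer dataset as a
# proxy for the study's data; impurity importances in place of SHAP values).
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Feature-attribution explanation: the three globally most influential features.
top = np.argsort(clf.feature_importances_)[::-1][:3]
attribution = [(data.feature_names[i], round(clf.feature_importances_[i], 3))
               for i in top]

# Example-based explanation: the nearest training case with the predicted label,
# shown to the user as "a similar patient the model has seen".
x = X_test[0]
pred = clf.predict(x.reshape(1, -1))[0]
candidates = X_train[y_train == pred]
prototype = candidates[np.argmin(np.linalg.norm(candidates - x, axis=1))]

print("attribution:", attribution)
print("prediction:", data.target_names[pred])
```

In the study itself, SHAP-style attributions and example-based explanations like these were presented to non-specialist participants and compared on understanding, trust, and perceived usefulness.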

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Biomedical Text Mining and Ontologies