This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
XAI Effect on Laypeople VS Experts Perceptions of AI Outcomes - Preliminary Work
Citations: 0
Authors: 2
Year: 2026
Abstract
Artificial Intelligence (AI) systems are increasingly integrated into high-stakes domains that affect daily life. Explainable Artificial Intelligence (XAI) methods are essential for understanding how these systems make decisions, particularly when complex "black box" models are used. XAI methods are diverse, and it is not yet fully understood how different explanations work for different people in different contexts. Hence, this study presents preliminary results of work in progress that aims to investigate the effect of different XAI methods (SHAP, LIME, and DiCE) with respect to different stakeholder types (laypeople vs. experts) and system domains (e.g., HR, Legal, Medical, Entertainment). This paper presents the initial results of our first case study, which focuses on an AI-based recruitment system.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,326 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,218 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,111 citations