This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI (XAI) in Practice: Users’ Perceptions of Transparency and Understanding in Automated Decision Systems
Citations: 0 · Authors: 3 · Year: 2026
Abstract
This study explores how users perceive the transparency and interpretability of explainable artificial intelligence (XAI) systems, and how the quality of the explanations they receive shapes their trust, understanding, and ethical judgment of automated decisions. Using a qualitative design, 16 participants with varied professional backgrounds were interviewed with semi-structured questions about their experience with AI-based decision systems. The data were analysed through manual thematic analysis to identify major patterns and the narrative meaning in the participants' accounts. Participants emphasised that transparency requires explanations that are clear, contextual, and defensible. Clear communication fostered understanding and active engagement, whereas vagueness and excessive technicality produced confusion and doubt. The study found that transparency is perceived not merely as a technical property but as a relational and moral construct tied to fairness and respect for user autonomy. The results underline the importance of designing XAI systems around user interpretability, ethical responsibility, and communicative effectiveness. Clear explanations can help bridge the divide between AI logic and human reasoning and thereby strengthen public confidence in AI. The study concludes that explainability should not be treated as an afterthought in AI design, but as a pillar of human-centred technological innovation.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,299 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,198 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,098 citations