This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
XAIUI: User Belief-Driven Explainable AI for Context-Aware Adaptive Interfaces
Citations: 0
Authors: 5
Year: 2025
Abstract
Explainable AI (XAI) offers solutions to the challenges of predictability and interpretability in adaptive interfaces, particularly in Augmented Reality (AR) systems that dynamically adapt information based on situational contexts. While traditional XAI methods highlight contextual factors influencing adaptations, they often overlook the user’s internal understanding, such as their expertise and contextual perceptions. This omission can result in explanations that feel redundant or obvious. We present XAIUI, a computational approach that generates tailored explanations by integrating the system’s adaptation model with a Bayesian model of the user’s internal representation. Two online studies evaluated XAIUI. In the first study (N = 77), participants ranked XAIUI’s explanations as most preferred compared to four ablations (\(\chi^{2}(4) = 62.28\), \(p < 0.001\)). In the second study (N = 110), XAIUI’s explanations were rated significantly less complex (\(\chi^{2}(4) = 840.855\), \(p < 0.001\)) than all ablations, except showing no explanation. Our results demonstrate XAIUI’s ability to deliver user-centric, concise, and intuitive explanations, highlighting its potential to enhance AI-driven interfaces.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14,227 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations