OpenAlex · Updated hourly · Last updated: 19 Mar 2026, 21:06

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

XAIUI: User Belief-Driven Explainable AI for Context-Aware Adaptive Interfaces

2025 · 0 citations · ACM Transactions on Interactive Intelligent Systems
Open full text at the publisher

0 citations · 5 authors · 2025

Abstract

Explainable AI (XAI) offers solutions to the challenges of predictability and interpretability in adaptive interfaces, particularly in Augmented Reality (AR) systems that dynamically adapt information based on situational contexts. While traditional XAI methods highlight contextual factors influencing adaptations, they often overlook the user's internal understanding, such as their expertise and contextual perceptions. This omission can result in explanations that feel redundant or obvious. We present XAIUI, a computational approach that generates tailored explanations by integrating the system's adaptation model with a Bayesian model of the user's internal representation. Two online studies evaluated XAIUI. In the first study (N = 77), participants ranked XAIUI's explanations as most preferred compared to four ablations (\(\chi^{2}(4) = 62.28\), p < 0.001). In the second study (N = 110), XAIUI's explanations were rated significantly less complex (\(\chi^{2}(4) = 840.855\), p < 0.001) than all ablations, except the no-explanation condition. Our results demonstrate XAIUI's ability to deliver user-centric, concise, and intuitive explanations, highlighting its potential to enhance AI-driven interfaces.
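The abstract's core idea, comparing the system's adaptation reasons against a Bayesian estimate of what the user already believes, can be illustrated with a minimal sketch. This is a hypothetical toy model, not the paper's implementation: the factor names, priors, and likelihoods are invented for illustration.

```python
# Hypothetical sketch (not XAIUI's actual implementation): explain only
# those contextual factors the user is unlikely to already be aware of.

def bayes_update(prior, likelihood_true, likelihood_false):
    """Posterior probability that the user believes a factor is active,
    given an observation of their behavior."""
    num = prior * likelihood_true
    denom = num + (1.0 - prior) * likelihood_false
    return num / denom

def select_explanations(factors, threshold=0.5):
    """Keep only factors whose posterior user-awareness is below threshold,
    i.e. factors that would not feel redundant or obvious if explained."""
    selected = []
    for name, prior, obs_true, obs_false in factors:
        posterior = bayes_update(prior, obs_true, obs_false)
        if posterior < threshold:  # user likely unaware -> worth explaining
            selected.append(name)
    return selected

# Invented example: an AR interface dims a panel. Glare is visually obvious
# to the user (high posterior belief), low battery is not, so only the
# battery factor is selected for explanation.
factors = [
    ("glare on display", 0.8, 0.9, 0.2),  # posterior ~0.95, skipped
    ("low battery",      0.2, 0.3, 0.7),  # posterior ~0.10, explained
]
print(select_explanations(factors))  # -> ['low battery']
```

The design point this mirrors is the abstract's claim that explanations feel redundant when they restate what the user's internal representation already contains; filtering on the posterior keeps only the non-obvious factors.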

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · Emotion and Mood Recognition