OpenAlex · Updated hourly · Last updated: 24 Mar 2026, 01:58

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

ChatXplain: Interpretable Explanations for Intelligent Assistants via Modular Rationale and Saliency

2026 · 0 citations

Citations: 0 · Authors: 6 · Year: 2026

Abstract

Large Language Model (LLM)-based assistants increasingly support high-stakes decision workflows, yet their opaque reasoning processes limit user trust, hinder debugging, and complicate compliance. Existing explanation techniques—such as SHAP, LIME, and attention-based attribution—struggle to provide coherent, multi-turn dialogue-level interpretability while meeting production latency constraints. This paper introduces ChatXplain, a modular framework that generates real-time, interpretable explanations for LLM-driven conversational systems without modifying underlying model weights. The framework integrates four lightweight components—an intent classifier, a rationale generator, a saliency visualizer, and a dialogue tracker with auditable reasoning traces—designed to operate as an auxiliary layer atop any LLM. We provide full architectural details, input–output specifications, and reproducibility guidelines, along with quantitative comparisons against SHAP and LIME across two domains: customer service and financial advisory. Experiments on 1,800 multi-turn dialogues generated from 218 simulated user profiles, designed to mimic realistic interaction patterns, show that ChatXplain improves user trust proxies by 22%, explanation clarity by 19%, and developer debugging efficiency by 31%, while adding only 8.7% median latency overhead. We further introduce the ChatXplain Score, a combined semantic-fidelity and human-interpretability metric for evaluating dialogue explanations. Results show that the proposed approach delivers significantly more consistent and contextually aligned explanations than baseline attribution methods, establishing ChatXplain as a practical and reproducible framework for deploying interpretable intelligent assistants in real-world environments.
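The abstract describes ChatXplain as an auxiliary layer of four lightweight components (intent classifier, rationale generator, saliency visualizer, dialogue tracker) sitting atop an unmodified LLM. As the paper's actual interfaces are not reproduced on this page, the following is only a minimal sketch of how such a modular wrapper might be structured; every class, function, and heuristic here is hypothetical, with trivial stand-ins for the intent and saliency components.

```python
from dataclasses import dataclass

@dataclass
class ExplanationRecord:
    """One auditable reasoning-trace entry per dialogue turn (hypothetical schema)."""
    turn: int
    intent: str
    rationale: str
    saliency: dict  # token -> normalized weight

class ExplainableAssistant:
    """Sketch of an explanation layer wrapping any LLM callable, weights untouched."""

    def __init__(self, llm):
        self.llm = llm      # any callable: prompt string -> response string
        self.trace = []     # dialogue tracker: accumulates ExplanationRecords

    def classify_intent(self, user_msg):
        # Stand-in intent classifier (keyword rule, purely illustrative).
        return "billing" if "invoice" in user_msg.lower() else "general"

    def saliency(self, user_msg):
        # Stand-in saliency map: weights tokens by length, normalized to sum to 1.
        tokens = user_msg.split()
        total = sum(len(t) for t in tokens) or 1
        return {t: len(t) / total for t in tokens}

    def respond(self, user_msg):
        intent = self.classify_intent(user_msg)
        response = self.llm(user_msg)  # underlying model is queried as-is
        record = ExplanationRecord(
            turn=len(self.trace) + 1,
            intent=intent,
            rationale=f"Answered a '{intent}' request based on the user's message.",
            saliency=self.saliency(user_msg),
        )
        self.trace.append(record)  # auditable trace grows one record per turn
        return response, record

# Usage with a stub LLM in place of a real model:
assistant = ExplainableAssistant(llm=lambda msg: "Your invoice is attached.")
reply, record = assistant.respond("Please resend my invoice")
```

The design point mirrored here is that explanation artifacts are produced alongside, not inside, the model call, so any LLM backend can be swapped in without retraining.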

Similar works