This is an overview page with metadata for this scientific publication. The full article is available from the publisher.
Reflexive Dialogue-Based Explainability for Human-AI Collaboration: An Empirical Study on Adaptive and Interactive Explanations
Citations: 0
Authors: 4
Year: 2025
Abstract
As artificial intelligence (AI) systems are increasingly integrated into decision-making processes, the need for transparency and user-aligned explanations has become critical. While traditional explainability methods, whether static or post-hoc, have advanced, they often fail to adapt to users' evolving informational needs during real-time interactions, limiting their effectiveness in collaborative contexts. This paper addresses this gap by empirically evaluating reflexive dialogue-based explanations, which dynamically adjust explanatory content through iterative user-AI exchanges. We conducted a mixed-methods study involving 80 participants performing complex decision-support tasks under two conditions: static explanations and reflexive dialogue. Quantitative results demonstrate that reflexive dialogue significantly improves task accuracy, comprehension, and trust calibration, while qualitative findings reveal enhanced user engagement, perceived agency, and satisfaction. The study identifies key interaction patterns that support cognitive integration and trust refinement. Our main contribution is the validation of a human-centered, adaptive explanation framework that goes beyond one-size-fits-all transparency. The novelty of this work lies in positioning reflexive dialogue not merely as a user-support feature but as an essential mechanism for dynamically co-constructing meaning in human-AI collaboration.
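The mechanism the abstract describes, an explanation that is revised through iterative user-AI exchanges until the user's informational need is met, can be illustrated with a minimal loop. The Python sketch below is an illustration only, not the authors' implementation; every name in it (ExplanationState, initial_explanation, refine) is a hypothetical placeholder.

# A minimal sketch of a reflexive dialogue-style explanation loop, under the
# assumptions stated above. The helper names are illustrative, not from the paper.

from dataclasses import dataclass, field

@dataclass
class ExplanationState:
    # Running record of every explanation version shown to the user.
    history: list[str] = field(default_factory=list)

def initial_explanation(decision: str) -> str:
    # Stand-in for the static-condition baseline: one fixed explanation.
    return f"The system recommends '{decision}' based on its most influential inputs."

def refine(previous: str, question: str) -> str:
    # Stand-in for the reflexive step: adapt content to the user's stated need.
    return f"{previous} [refined to address: '{question}']"

def reflexive_dialogue(decision: str) -> ExplanationState:
    state = ExplanationState()
    explanation = initial_explanation(decision)
    state.history.append(explanation)
    print(explanation)
    # Iterative user-AI exchange: continue until the user has no follow-up,
    # i.e. their informational need is met.
    while question := input("Follow-up question (blank to finish): ").strip():
        explanation = refine(explanation, question)
        state.history.append(explanation)
        print(explanation)
    return state

if __name__ == "__main__":
    reflexive_dialogue("approve the loan application")

In such a design, the accumulated history is what would make the interaction patterns the study analyzes observable at all; a static explanation produces no comparable trace.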
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14.227 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations