This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
From Models to Systems: A Survey of Explainability for Tool-Augmented Language Models and AI Agents
Citations: 0
Authors: 5
Year: 2026
Abstract
Large language models (LLMs) are increasingly being used as part of complex agentic systems that orchestrate the use of external tools, such as retrieval mechanisms or code interpreters. In this survey, we argue that this development necessitates a rethinking of the goals of explainable artificial intelligence (XAI): Rather than focusing on providing users with explanations for monolithic machine learning models, we need system-level explanations that also convey which tools are used and how, as well as how external execution traces causally influence system behavior. We provide an overview of existing methods in explainable AI and discuss the limitations of monolithic XAI methods in agentic contexts. Finally, we highlight open challenges in providing faithful explanations for LLM-based systems.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,968 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,360 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,714 citations
Generative adversarial networks
2020 · 13,338 citations