This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Exploring the Impact of Explainability in Large Language Model (LLM) Applications on User Experience
Citations: 1
Authors: 5
Year: 2025
Abstract
Due to their "black-box" nature, explainability has long been a significant research topic in machine learning. Researchers have been committed to explaining the principles behind models, their scope of influence, and their decision-making to experts and technical practitioners. However, with the increasing popularity of Large Language Models (LLMs), more general users interact with these applications, bringing new challenges for explainability. This study explores the impact of LLM explainability on trust and satisfaction, revealing that both are significantly influenced by the degree and presentation of explainability. Moreover, trust and satisfaction vary across different risk scenarios. The study further evaluates the pros and cons of different explainability strategies, offering practical insights for the design of LLM applications.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,366 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,244 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,255 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,122 citations