This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Understanding Explainability in Recommender Systems: User Insights and Perspectives
Citations: 0
Authors: 3
Year: 2025
Abstract
This paper presents our preliminary findings from a systematic literature review on explainability in recommender systems from a user-centered perspective. Despite extensive literature on explainability in Artificial Intelligence (XAI), this study focuses specifically on how explainability in recommender systems affects user trust, taking into account insights drawn from design and Human-Computer Interaction (HCI). To this end, we extracted 387 journal and conference papers from ACM, IEEE, Taylor & Francis, ScienceDirect, and Springer. After applying inclusion and exclusion criteria, 10 relevant articles published between 2018 and 2024 were selected for this analysis. According to the results, users value justifications for recommendations to better understand why certain products or services are suggested. Moreover, scrutability is crucial for enabling users to provide feedback when recommendations do not align with their preferences. Explanations should be informative and easily understandable to enhance transparency and decision-making efficiency. The final component of fostering user trust, satisfaction, and transparency is providing adaptive explanations that let users control the level of detail based on their mental models and personal characteristics.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?"
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations