This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Human-AI Collaboration in Explainable Recommender Systems: An Exploration of User-Centric Explanations and Evaluation Frameworks
4
Citations
1
Author
2023
Year
Abstract
Explainable Recommender Systems (XRS) have emerged as a transformative technology that bridges the gap between recommendation accuracy and transparency, providing users with understandable explanations for AI-driven suggestions. This paper examines the critical role of Human-AI Collaboration in XRS, aiming to enhance user understanding, trust, and satisfaction in the recommendation process. It begins by investigating the dynamics of collaboration between users and AI algorithms within the context of XRS, exploring the interplay between users' preferences, their cognitive processes, and the explanations the system generates. This exploration forms the foundation for developing user-centric explanations tailored to individual comprehension levels and preferences. Various techniques for facilitating Human-AI Collaboration are examined, including model-based explanations, post-hoc approaches, interactive interfaces, and hybrid methods. These techniques empower users to interact with the XRS, customize explanations, and gain insight into the recommendation process. Addressing challenges in implementing Human-AI Collaboration, the paper discusses interpreting complex AI models, balancing explanation simplicity with comprehensiveness, and ensuring user trust and system adoption; ethical considerations regarding user privacy and fairness are also addressed. To enable user-driven explanation generation, the paper proposes strategies that empower users to personalize the explanation process, select preferred explanation styles, and provide contextual information, fostering a more transparent and collaborative relationship between users and the XRS. To evaluate the effectiveness of Human-AI Collaborative XRS, the paper introduces a comprehensive evaluation framework with metrics for explanation quality, user understanding, satisfaction, trust, and engagement.
The results of this evaluation can guide further improvements in XRS, ensuring the delivery of transparent, user-centric, and trustworthy recommendations. The findings contribute to advancing XRS technology and serve as a foundation for future investigations into the collaborative nature of recommender systems. By fostering collaboration between humans and AI, we can design recommender systems that empower users, promote user satisfaction, and facilitate informed decision-making in various real-world scenarios.
Related Work
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,464 citations
Generative Adversarial Nets
2023 · 19,843 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,259 citations
"Why Should I Trust You?"
2016 · 14,315 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,138 citations