This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Fostering Trust and Engagement in AI-Powered Corporate Learning: Investigating the Role of Explainability
Citations: 0
Authors: 4
Year: 2025
Abstract
AI-powered learning platforms promise personalized upskilling, yet often face employee distrust and low uptake. Grounded in the trust, engagement, and explainable AI (XAI) literature, this research-in-progress examines how alternative explanation designs influence users' trust in and behavioral engagement with a corporate learning recommender. Following a Design Science Research process, we are developing a hybrid recommendation engine and an interface that provides feature-based and counterfactual explanations. A two-week field experiment with about 30 knowledge workers (two explainable conditions vs. one baseline) is planned to measure post-study trust and enrolment and completion rates, and to collect qualitative feedback. Expected contributions include empirically validated design principles for providing explanations, deeper insight into the trust-engagement nexus in workplace learning, and practitioner guidance for explainable, employee-centric AI deployment. By extending XAI scholarship to corporate Learning & Development, the study addresses an identified research gap and supports responsible AI adoption in organizations.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,615 cit.
Generative Adversarial Nets
2014 · 19,894 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,306 cit.
"Why Should I Trust You?"
2016 · 14,446 cit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,171 cit.