This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable AI for Mobile Learning: Enhancing Trust and Transparency through HCI
Citations: 0
Authors: 2
Year: 2026
Abstract
The digital transformation of education, driven by artificial intelligence (AI), has led to intelligent learning systems that personalize instruction, predict student performance, and automate assessments. However, the lack of transparency in AI-driven educational tools raises concerns about trust and user acceptance, particularly in mobile and interactive learning platforms used on-the-go by diverse users. Human-computer interaction (HCI) principles address these issues by promoting user-centered design and interpretability, aligning with pedagogical goals. Explainable AI (XAI) enhances this by making AI decisions understandable to educators and students. This study reviews the intersection of AI, HCI, and XAI in mobile learning, analyzing HCI’s role in interface design, AI methodologies in adaptive environments, and XAI techniques for transparency. Findings highlight XAI’s benefits in trust and accountability, alongside challenges like interpretability trade-offs, privacy, and mobile deployment costs. A research agenda is proposed to address these gaps, emphasizing ethical, transparent, and user-centric AI systems.
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,562 citations
Generative Adversarial Nets
2023 · 19,892 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,298 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,384 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,164 citations