This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The Influence of Explainable AI on User Trust and Cognitive Load: Implications for Learning Outcomes in Digital Education
Citations: 0
Authors: 4
Year: 2026
Abstract
The present study examined the influence of Explainable Artificial Intelligence (XAI) on user trust, cognitive load, and learning outcomes in digital education environments. With the rapid integration of AI technologies in online learning systems, understanding how transparency in AI decision-making affects learners' perceptions and performance has become increasingly important. This study adopted a quantitative research design and used a cross-sectional survey to collect data from 247 university students enrolled in higher education institutions, including two universities from Punjab and one university from Karachi, Pakistan. Explainable AI, user trust, cognitive load, and learning outcomes were measured with a structured questionnaire administered to students actively using digital learning platforms. The collected data were analyzed using descriptive statistics, correlation analysis, multiple regression, and chi-square tests. The findings showed that Explainable AI was significantly positively correlated with user trust (r = 0.64, p < 0.01), which implies that transparency in AI systems increases learners' confidence in digital learning technologies. In addition, regression analysis showed that Explainable AI significantly reduced cognitive load (b = -0.45, p < 0.001), suggesting that clear explanations help learners interpret AI-generated results with less unnecessary mental effort. Moreover, chi-square tests revealed significant associations between user trust and learning outcomes (χ² = 21.84, p < 0.05) and between cognitive load and learning outcomes (χ² = 18.27, p < 0.05). These results indicate that greater trust and lower cognitive load are associated with improved performance in AI-based learning settings. Overall, this study demonstrates that explainable AI features should be incorporated into digital education platforms to increase transparency, build user trust, reduce cognitive load, and, ultimately, improve learning outcomes.
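For readers who want a concrete sense of the reported analyses, the following is a minimal sketch of how the Pearson correlation, OLS regression, and chi-square tests described in the abstract could be computed in Python. The file name, column names, and category binning are hypothetical illustrations, not the authors' actual instrument or coding scheme.

```python
# Illustrative sketch only: reproduces the types of analyses named in the abstract
# (descriptive statistics, Pearson correlation, regression, chi-square test).
# "survey_responses.csv" and all column names are hypothetical placeholders.
import pandas as pd
from scipy import stats
import statsmodels.api as sm

df = pd.read_csv("survey_responses.csv")  # hypothetical: one row per respondent (n = 247)

# Descriptive statistics for the four measured constructs
print(df[["xai", "trust", "cognitive_load", "learning_outcome"]].describe())

# Pearson correlation between Explainable AI and user trust (reported: r = 0.64)
r, p = stats.pearsonr(df["xai"], df["trust"])
print(f"XAI vs. trust: r = {r:.2f}, p = {p:.4f}")

# Regression of cognitive load on Explainable AI (reported: b = -0.45)
X = sm.add_constant(df[["xai"]])
model = sm.OLS(df["cognitive_load"], X).fit()
print(model.summary())

# Chi-square test of independence between trust and learning outcomes,
# with both variables binned into low/medium/high categories (assumed binning)
trust_cat = pd.qcut(df["trust"], 3, labels=["low", "medium", "high"])
outcome_cat = pd.qcut(df["learning_outcome"], 3, labels=["low", "medium", "high"])
chi2, p, dof, _ = stats.chi2_contingency(pd.crosstab(trust_cat, outcome_cat))
print(f"chi2 = {chi2:.2f}, dof = {dof}, p = {p:.4f}")
```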
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,305 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,236 citations
"Why Should I Trust You?"
2016 · 14,204 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,103 citations