OpenAlex · Updated hourly · Last updated: 15.03.2026, 04:18

This is an overview page with metadata about this scholarly work. The full article is available from the publisher.

The Influence of Explainable AI on User Trust and Cognitive Load: Implications for Learning Outcomes in Digital Education

2026 · 0 citations · Review of Applied Management and Social Sciences · Open Access
Open full text at the publisher

0 citations · 4 authors · Year: 2026

Abstract

The present study examined the influence of Explainable Artificial Intelligence (XAI) on user trust, cognitive load, and learning outcomes in digital education environments. With the rapid integration of AI technologies into online learning systems, understanding how transparency in AI decision-making affects learners’ perceptions and performance has become increasingly important. The study adopted a quantitative research design and used a cross-sectional survey to collect data from 247 university students enrolled in higher education institutions, including two universities from Punjab and one university from Karachi, Pakistan. Explainable AI, user trust, cognitive load, and learning outcomes were measured with a structured questionnaire administered to students actively using digital learning platforms. The collected data were analyzed using descriptive statistics, correlation analysis, multiple regression, and chi-square tests. The findings showed that Explainable AI was significantly positively correlated with user trust (r = 0.64, p < 0.01), implying that AI system transparency increases learners’ confidence in digital learning technologies. A regression analysis further showed that Explainable AI significantly reduced cognitive load (b = -0.45, p < 0.001), suggesting that clear explanations help learners interpret AI-generated results without expending extra mental effort. Moreover, chi-square tests revealed significant associations between user trust and learning outcomes (χ² = 21.84, p < 0.05) and between cognitive load and learning outcomes (χ² = 18.27, p < 0.05). These results indicate that greater trust and lower cognitive load lead to performance improvements in AI-based learning settings.
Overall, this study demonstrates that explainable AI features should be incorporated into digital education platforms to provide greater transparency, build user trust, reduce cognitive load, and, ultimately, improve learning outcomes.
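The analyses named in the abstract (Pearson correlation, regression of cognitive load on XAI transparency, and chi-square tests of independence) can be sketched as follows. This is an illustrative sketch on synthetic data only: the variable names, scales, and dichotomization at the median are assumptions, not the paper's actual instrument or coding.

```python
# Illustrative sketch only: synthetic data standing in for the study's
# survey responses (n = 247). All variable names and scales are assumed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 247

# Simulated composite questionnaire scores.
xai = rng.normal(3.5, 0.8, n)                         # perceived XAI transparency
trust = 0.6 * xai + rng.normal(0, 0.6, n)             # user trust
cog_load = 3.0 - 0.45 * xai + rng.normal(0, 0.7, n)   # cognitive load

# Pearson correlation: XAI transparency vs. user trust.
r, p_r = stats.pearsonr(xai, trust)

# Simple linear regression: cognitive load on XAI transparency.
slope, intercept, r_reg, p_reg, se = stats.linregress(xai, cog_load)

# Chi-square test of independence: trust level vs. learning-outcome level,
# both dichotomized at the median (one common coding choice, assumed here).
outcome = 0.5 * trust - 0.3 * cog_load + rng.normal(0, 0.5, n)
trust_hi = trust > np.median(trust)
outcome_hi = outcome > np.median(outcome)
table = np.array([
    [np.sum(trust_hi & outcome_hi), np.sum(trust_hi & ~outcome_hi)],
    [np.sum(~trust_hi & outcome_hi), np.sum(~trust_hi & ~outcome_hi)],
])
chi2, p_chi, dof, _ = stats.chi2_contingency(table)

print(f"r = {r:.2f} (p = {p_r:.3g}), slope = {slope:.2f}, chi2 = {chi2:.2f}")
```

With data simulated this way, the correlation comes out positive and the regression slope negative, mirroring the direction (though not the exact magnitudes) of the reported effects.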

Topics

Explainable Artificial Intelligence (XAI) · Artificial Intelligence in Healthcare and Education · AI in Service Interactions