This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Explainable artificial intelligence in cognitive learning psychology: A psychometric meta-analysis
Citations: 0 · Authors: 1 · Year: 2026
Abstract
The purpose of this study is to examine explainable artificial intelligence (XAI) in cognitive psychology within higher education institutions. Data were retrieved from Scopus, Web of Science, and PsycINFO (1 January 2015–30 October 2025) in accordance with the PRISMA 2020 guidelines. The initial search identified 3,426 records; of these, 1,936 papers were excluded, and 616 studies underwent full-text review. In the final stage, 188 studies met the selection criteria, comprising 62,000 participants from higher education institutions worldwide. A science mapping analysis revealed three clusters: cognitive load and neural efficiency (C = 0.82, D = 0.76), attention and working memory (C = 0.64, D = 0.58), and metacognition and affective interaction (C = 0.42, D = 0.39). These clusters showed positive pooled correlations (r = 0.27–0.36), and consistency was demonstrated by a mean R² of 0.79 (p < 0.001). Model fit diagnostics were satisfactory (CFI = 0.96; RMSEA = 0.042), with moderate heterogeneity observed (I² ≈ 58%) and a global hypermean (μ₀) centered on a positive association (r ≈ 0.32). The psychometric meta-analytic model yielded the following posterior distributions for XAI in the cognitive psychology domain: μ = 0.33 (≈ r = 0.32), 95% CrI [0.27, 0.38]; σ_μ = 0.05 [0.01, 0.12]; τ̃ = 0.11. The findings indicated that XAI demonstrated validity within cognitive psychology, structured across three principal domains (cognitive load and neural efficiency, attention and working memory, and metacognition and affective interaction) within adaptive learning systems in higher education.
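The pooled correlations, heterogeneity (I²), and between-study spread (τ) reported above are standard outputs of a random-effects meta-analysis. As a rough illustration only (not the authors' actual Bayesian model, and with hypothetical study-level data), a frequentist DerSimonian–Laird pooling of Fisher-z-transformed correlations could be sketched as:

```python
import math

def random_effects_pool(rs, ns):
    """Pool correlation coefficients with a DerSimonian-Laird
    random-effects model on the Fisher-z scale."""
    # Fisher z-transform; the sampling variance of z is 1/(n - 3)
    zs = [math.atanh(r) for r in rs]
    vs = [1.0 / (n - 3) for n in ns]
    w = [1.0 / v for v in vs]  # fixed-effect (inverse-variance) weights
    z_fe = sum(wi * zi for wi, zi in zip(w, zs)) / sum(w)
    # Cochran's Q, then the DL estimate of between-study variance tau^2
    q = sum(wi * (zi - z_fe) ** 2 for wi, zi in zip(w, zs))
    df = len(rs) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)
    # Random-effects weights add tau^2 to each study's variance
    w_re = [1.0 / (v + tau2) for v in vs]
    z_re = sum(wi * zi for wi, zi in zip(w_re, zs)) / sum(w_re)
    # I^2: proportion of total variation due to heterogeneity
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return math.tanh(z_re), math.sqrt(tau2), i2

# Hypothetical study-level correlations and sample sizes (illustrative only)
rs = [0.27, 0.36, 0.32, 0.29, 0.35]
ns = [220, 310, 180, 400, 260]
r_pooled, tau, i2 = random_effects_pool(rs, ns)
print(f"pooled r = {r_pooled:.2f}, tau = {tau:.2f}, I^2 = {i2:.0f}%")
```

The pooled estimate back-transformed to the r scale lands inside the range of the input correlations; the paper's Bayesian model additionally reports credible intervals on the hypermean and on the heterogeneity parameters, which this frequentist sketch does not produce.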
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 21,050 citations
Generative Adversarial Nets
2023 · 19,896 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,381 citations
"Why Should I Trust You?"
2016 · 14,789 citations
Generative adversarial networks
2020 · 13,381 citations