This is an overview page with metadata for this scientific work. The full article is available from the publisher.
From Data to Pedagogy: Leveraging Explainable Artificial Intelligence to Enhance Trust, Transparency, and Effectiveness in Intelligent Learning Systems
Citations: 0
Authors: 2
Year: 2025
Abstract
The increasing integration of artificial intelligence into educational technologies has amplified concerns regarding transparency, trust, and pedagogical validity, particularly as complex machine learning models are deployed in high-stakes learning contexts. While contemporary intelligent learning systems demonstrate strong predictive capabilities, their black-box nature often limits educator trust and constrains meaningful instructional use. This study addresses this gap by proposing and empirically evaluating a human-centered framework that embeds explainable artificial intelligence (XAI) into intelligent learning systems to bridge the divide between data-driven prediction and pedagogical decision-making. The proposed architecture integrates instance-level explainability mechanisms—such as SHAP and ceteris-paribus analyses—with predictive models and an interactive teacher dashboard, enabling educators to interpret, validate, and act upon AI-generated insights. Using real-world learning management system data and mixed-methods evaluation, the study demonstrates that instance-level explanations significantly enhance interpretability, reduce mispredictions, and strengthen teacher trust without compromising predictive performance. Empirical findings further indicate that explainable feedback supports targeted pedagogical interventions and contributes to measurable improvements in student outcomes. By situating explainability within theories of learning, trust, and human–AI collaboration, this work advances the design of transparent, trustworthy, and pedagogically grounded intelligent learning systems, offering practical and theoretical contributions to the evolving field of AI in education.
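The abstract names ceteris-paribus analyses as one of the framework's instance-level explainability mechanisms. The sketch below illustrates the general idea of such a profile, not the study's actual implementation: for a single instance, one feature is varied over a grid while all other features are held fixed, and the model's predicted probability is traced along that grid. The synthetic dataset, the scikit-learn random forest, and the function name `ceteris_paribus` are all illustrative assumptions.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic stand-in for learning-management-system features (assumed).
X, y = make_classification(n_samples=200, n_features=4, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

def ceteris_paribus(model, instance, feature_idx, grid):
    """Vary one feature of a single instance over `grid`, holding all
    other features fixed, and return the predicted positive-class
    probability at each grid point."""
    profiles = np.tile(instance, (len(grid), 1))
    profiles[:, feature_idx] = grid
    return model.predict_proba(profiles)[:, 1]

# Profile one "student" along feature 2 across its observed range.
student = X[0]
grid = np.linspace(X[:, 2].min(), X[:, 2].max(), 5)
probs = ceteris_paribus(model, student, feature_idx=2, grid=grid)
print(np.round(probs, 2))
```

In a teacher-dashboard setting of the kind the abstract describes, such a profile would let an educator see how a prediction for one student changes as a single input (e.g., a hypothetical engagement metric) is varied, which is what makes the explanation actionable at the instance level.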
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,436 cit.
Generative Adversarial Nets
2023 · 19,843 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,256 cit.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,294 cit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,133 cit.