This is an overview page with metadata for this scientific article. The full article is available from the publisher.
The Impact of Explainable AI on Teachers’ Trust and Acceptance of AI EdTech Recommendations: The Power of Domain-specific Explanations
Citations: 13
Authors: 4
Year: 2025
Abstract
Trust is crucial for teachers’ adoption of AI-enhanced educational technologies (AI-EdTech), yet how this trust is formed and maintained remains poorly understood. An aspect of system design that appears deeply related to trust is transparency, which can be achieved through explainable AI (XAI) approaches. The present study explores the dynamic nature of teachers’ trust in AI-EdTech systems, how it relates to understandability, and XAI’s role in enhancing it. Building upon Hoff and Bashir’s ‘trust in automation’ model (2015), we propose a theoretical model that connects these factors. We validated the applicability of the proposed model to the AI-in-Education context using a mixed-method, within-subject design that measured understandability, trust, and acceptance of AI recommendations among 41 in-service chemistry teachers. The results showed a significant positive correlation between the three factors, as anticipated by the model, and demonstrated the heterogeneous understandability of different XAI schemes, with domain-driven schemes proving superior to data-driven ones. In addition, the study reveals two further factors influencing teachers’ adoption of AI-EdTech: pedagogical perspectives and workload-reduction potential. The study provides a theoretical explanation of how different XAI schemes impact trust through understandability. Furthermore, it emphasizes the need for greater attention to XAI, which fosters trust and facilitates the acceptance of AI-EdTech.
Similar Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,253 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,230 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,156 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,093 citations