This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Explainable AI in education: A multi-stakeholder approach to transparency and ethical practice
Citations: 0 · Authors: 1 · Year: 2025
Abstract
Integrating AI into education makes explainability an ethical imperative. This analysis examines competency requirements for five stakeholder groups – learners, educators, parents, system leaders, and developers – while evaluating capacity-building interventions, participatory design, and multi-stakeholder governance frameworks. Technical disclosure alone proves insufficient; sustainable implementation demands distributed responsibility through professional development, human-centered design, and collaborative governance. Three critical challenges persist: cognitive overload from complex explanations, equity gaps in interpretive capabilities, and automation bias fostering over-reliance. Effective educational AI adoption requires integrating participatory design with institutional governance to establish shared accountability across the educational ecosystem.
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,310 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,238 citations
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,210 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 citations