This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Explainable artificial intelligence-machine learning models to estimate overall scores in tertiary preparatory general science course
11
Citations
10
Authors
2024
Year
Abstract
Educational data mining is valuable for uncovering latent relationships in educational settings, particularly for predicting students' academic performance. This study introduces an interpretable hybrid model, optimised through Tree-structured Parzen Estimation (TPE) and Support Vector Regression (SVR), to predict overall scores (OT) using five assignment marks and one examination mark as predictors. Neural Network-based, Tree-based, Ensemble-based, and Boosting-based methods are evaluated against the hybrid TPE-optimised SVR model for forecasting final examination grades among 492 students enrolled in the TPP7155 (General Science) course at the University of Southern Queensland, Australia, during the 2020-2021 academic year. Additionally, Local Interpretable Model-agnostic Explanations (LIME) and SHapley Additive exPlanations (SHAP) techniques are employed to elucidate the inner workings of these prediction models. The findings highlight the superior performance of the proposed model, which exhibits the lowest Root Mean Squared Error (RMSE) and Relative Root Mean Squared Error (RRMSE), as well as the highest Willmott's index (WI), Legates–McCabe index (LM), and Nash–Sutcliffe Efficiency (NS), with assignment and examination marks identified as pivotal predictors of OT. SHAP and LIME analyses reveal the examination score (ET) as the most influential feature, impacting predicted OT by an average of ±4.93. Conversely, Assignment 1 emerges as the least informative feature, contributing merely ±0.64 to OT predictions. This research underscores the efficacy of the proposed interpretable hybrid TPE-optimised SVR model in discerning relationships among continuous learning variables, thereby empowering educators with early intervention capabilities and enhancing their ability to anticipate student performance prior to course completion.
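The abstract's pipeline (tune an SVR on assignment/exam marks, then evaluate with RMSE and NS) can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: a plain grid search stands in for the paper's TPE optimiser, the feature weights and data are invented, and the `scikit-learn` API is the only assumed dependency.

```python
import numpy as np
from sklearn.svm import SVR
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import mean_squared_error

rng = np.random.default_rng(0)

# Synthetic stand-in data: five assignment marks plus one exam mark -> overall score (OT).
# The weights are hypothetical; the exam is weighted heaviest, mirroring the paper's
# finding that the examination score (ET) is the most influential feature.
n = 200
X = rng.uniform(0, 100, size=(n, 6))
weights = np.array([0.05, 0.05, 0.10, 0.10, 0.10, 0.60])
y = X @ weights + rng.normal(0, 2, size=n)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

# Hyperparameter search over C, epsilon, gamma (TPE would explore the same space
# adaptively instead of exhaustively).
search = GridSearchCV(
    SVR(kernel="rbf"),
    param_grid={"C": [1, 10, 100], "epsilon": [0.1, 1.0], "gamma": ["scale", 0.01]},
    scoring="neg_root_mean_squared_error",
    cv=3,
)
search.fit(X_tr, y_tr)
pred = search.predict(X_te)

# Two of the paper's evaluation metrics: RMSE (lower is better) and
# Nash-Sutcliffe Efficiency, NS = 1 - SSE / SST (closer to 1 is better).
rmse = float(np.sqrt(mean_squared_error(y_te, pred)))
ns = float(1 - np.sum((y_te - pred) ** 2) / np.sum((y_te - y_te.mean()) ** 2))
print(f"RMSE={rmse:.2f}  NS={ns:.2f}")
```

The same fitted model could then be passed to SHAP or LIME explainers to attribute predictions to individual assignment and exam marks, as the study does.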
Similar works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,408 citations
Generative Adversarial Nets
2023 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,253 citations
"Why Should I Trust You?"
2016 · 14,286 citations
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,132 citations