This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Explainable AI for Analyzing Behavioral Patterns in Serious Games: From Educational to Professional Training Contexts
Citations: 0
Authors: 9
Year: 2025
Abstract
This paper explores how Explainable AI (XAI) techniques, applied to machine learning models such as Transformers and LSTMs, can help analyze behavioral patterns in gaming. Drawing on insights from the field of educational games for neurodivergent children, where explainability is crucial for success, we propose adapting these approaches to serious games for professional training. By capturing worker engagement and explaining performance outcomes, XAI can support not only the workers themselves but also instructors and managers in understanding why failures occur. Our experiments with Portuguese gaming datasets show that LSTM models, combined with explainability tools such as LIME, outperform Transformers when identifying nuanced emotional patterns, with a relative gain of 7% for the LSTM. In the validation phase, the LSTM model achieved an accuracy of 0.78 versus 0.73 for the Transformer. We argue that such explainable insights can enhance professional training by providing more targeted feedback and improving the overall training process.
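The abstract's core mechanism (LIME applied to a trained sequence classifier) works by perturbing an instance, querying the black-box model on the perturbations, and fitting a proximity-weighted linear surrogate whose coefficients serve as local feature importances. The paper's own models and data are not available here, so the sketch below uses a hypothetical `black_box_predict` function as a stand-in for the trained LSTM, and all feature names and parameters are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

# Hypothetical stand-in for a trained LSTM classifier: a black-box
# scoring function over a fixed-length feature vector (e.g.,
# per-session engagement features). Purely illustrative.
def black_box_predict(X):
    # probability-like score dominated by features 0 and 2
    logits = 2.0 * X[:, 0] - 1.5 * X[:, 2] + 0.1 * X[:, 1]
    return 1.0 / (1.0 + np.exp(-logits))

def lime_style_explanation(x, predict_fn, n_samples=500, sigma=0.75, seed=0):
    """Fit a locally weighted linear surrogate around instance x."""
    rng = np.random.default_rng(seed)
    # 1. Perturb the instance with Gaussian noise
    Z = x + rng.normal(scale=0.5, size=(n_samples, x.size))
    # 2. Query the black-box model on the perturbed samples
    y = predict_fn(Z)
    # 3. Weight samples by proximity to x (RBF kernel)
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * sigma ** 2))
    # 4. Weighted least squares fit of an interpretable linear surrogate
    A = np.hstack([Z, np.ones((n_samples, 1))])  # add intercept column
    W = np.diag(w)
    coef, *_ = np.linalg.lstsq(A.T @ W @ A, A.T @ W @ y, rcond=None)
    return coef[:-1]  # per-feature local importances (intercept dropped)

x = np.array([0.2, 0.5, 0.8])
weights = lime_style_explanation(x, black_box_predict)
# weights[0] is positive and weights[2] negative, mirroring the
# black box's local behavior around x
```

In practice one would use the `lime` package against the real LSTM; this from-scratch version only illustrates the perturbation-and-surrogate idea the abstract relies on.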
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,311 cit.
Generative Adversarial Nets
2014 · 19,841 cit.
Visualizing and Understanding Convolutional Networks
2014 · 15,238 cit.
"Why Should I Trust You?": Explaining the Predictions of Any Classifier
2016 · 14,210 cit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,104 cit.