This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Bridging Human and Artificial Intelligence: Modeling Human Learning with Explainable AI Tools
Citations: 0
Authors: 4
Year: 2026
Abstract
We address a gap in Machine Learning–human alignment research by proposing that methods from Explainable AI (XAI) can be repurposed to quantitatively model human learning. To achieve alignment between human experts and Machine Learning (ML) models, we must first be able to explain the problem-solving strategies of human experts with the same rigor we apply to ML models. To demonstrate this approach, we model expertise in the complex domain of particle accelerator operations. Analyzing 14 years of operational text logs, we construct weighted graphs where nodes represent operational subtasks and edges capture their strategic relationships. We then examine these strategic models across four granularity levels. Our analysis reveals statistically significant changes with expertise at three of four graph levels. Remarkably, despite numerous possible ways to partition subtasks, operators across all expertise levels demonstrate a striking consistency in high-level strategy, partitioning the task into the same three functional communities. This suggests a shared “divide and conquer” cognitive framework. Expertise develops within this stable framework, as experts exhibit greater cognitive flexibility (forming more cross-community connections) and build more refined internal models. The primary contribution of this work is a methodology for creating a quantitative, interpretable baseline of expert human performance. This provides a “ground truth” for future research in alignment between humans and ML models, enabling a new approach to verification: the ML model’s representation of the task can be quantitatively compared against the human expert benchmark to measure their alignment. This paves the way for building safer, more interpretable partnerships between humans and ML models.
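The graph-construction step sketched in the abstract can be illustrated concretely. Below is a minimal Python sketch, not the authors' code: the subtask names and log sequences are hypothetical placeholders, consecutive co-occurrence is assumed as the edge-weighting scheme, and networkx's greedy modularity maximization stands in for whichever community-detection method the paper actually uses.

```python
# Minimal sketch of the abstract's pipeline, under stated assumptions:
# nodes are operational subtasks, edge weights count consecutive
# co-occurrence in log sequences, and greedy modularity maximization
# is an assumed stand-in for the paper's partitioning method.
from itertools import pairwise
from collections import Counter

import networkx as nx
from networkx.algorithms.community import greedy_modularity_communities

# Hypothetical placeholder data: each log entry yields an ordered
# list of subtasks performed by an operator.
log_sequences = [
    ["beam_setup", "rf_tuning", "orbit_correction", "beam_delivery"],
    ["beam_setup", "orbit_correction", "rf_tuning", "beam_delivery"],
    ["fault_diagnosis", "magnet_reset", "orbit_correction"],
]

# Count how often two subtasks occur consecutively; this is a simple
# proxy for a "strategic relationship" between them.
edge_counts = Counter()
for seq in log_sequences:
    for a, b in pairwise(seq):
        if a != b:
            edge_counts[frozenset((a, b))] += 1

# Build the weighted graph of subtasks.
G = nx.Graph()
for pair, w in edge_counts.items():
    u, v = tuple(pair)
    G.add_edge(u, v, weight=w)

# Partition the graph into functional communities (the paper reports
# three communities that stay stable across expertise levels).
communities = greedy_modularity_communities(G, weight="weight")
membership = {node: i for i, com in enumerate(communities) for node in com}

# Cross-community edges mirror the abstract's proxy for cognitive
# flexibility; on real logs this count could be compared per expertise level.
cross_edges = sum(1 for u, v in G.edges if membership[u] != membership[v])
print(f"{len(communities)} communities, {cross_edges} cross-community edges")
```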
Related Works
Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization
2017 · 20,336 citations
Generative Adversarial Nets
2014 · 19,841 citations
Visualizing and Understanding Convolutional Networks
2014 · 15,241 citations
"Why Should I Trust You?"
2016 · 14.227 Zit.
On a Method to Measure Supervised Multiclass Model’s Interpretability: Application to Degradation Diagnosis (Short Paper)
2024 · 13,114 citations