This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Tool, tutor, or crutch?: A grounded theory of cognitive scaffolding and offloading in AI-assisted programming education
Citations: 0
Authors: 3
Year: 2026
Abstract
The integration of generative AI into programming education has produced a widely reported tension between performance and learning. We distinguish immediate task performance from genuine learning (durable, transferable conceptual understanding and evaluative skill) and examine how AI support shapes learning processes, not merely outcomes. While many studies document improved speed, accuracy, and affect with AI support, questions remain about the quality of the underlying learning. We use a constructivist grounded-theory design with constant comparison across two naturally occurring course sections: an AI-enabled section and a human pair-programming section used as a theoretical contrast. Over one semester, we collected data from undergraduate Java programming students, triangulating across interaction logs, pre-/post-course concept maps, and semi-structured dyadic interviews (AI-enabled: N = 24; theoretical contrast: N = 17). Analysis revealed a core tension between students’ pursuit of “Domain Mastery” (conceptualization, explanation, and evaluation) and “Tool Mastery” (procedural efficiency with AI). We identified dynamic strategy switching (the Strategic Dance), Partnership Framing with an Illusion of Dialogue subtheme, and two recurrent evaluation challenges (Trust-but-Can’t-Verify for novices; a Boilerplate Blindspot for more experienced students). We also describe attenuated metacognitive calibration, a mismatch between perceived readiness and independent capability, co-occurring with sustained offloading patterns. These categories synthesize into a process-level tension model with two recurrent loops (Scaffolding and Offloading), interpreted through Cognitive Load Theory and Self-Determination Theory. We offer a theory-building account that helps explain how widely observed performance and affect gains can co-occur with thinner opportunities for germane processing and authorship.
The model generates testable implications (e.g., critique-the-AI phases, planned fading, verification journals), and we invite multi-site tests to evaluate boundary conditions.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,303 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,155 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,555 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,453 citations