This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
The Illusion of Understanding: How Middle-Schoolers Fail to Regulate Inquiry with ChatGPT in a Science Task
Citations: 1
Authors: 5
Year: 2025
Abstract
Generative AI (GenAI) tools allow for effortless task completion, potentially fostering cognitive and metacognitive laziness in students. While surveys indicate widespread GenAI use among students as young as 11, their interaction strategies remain under-explored. A critical indicator of the quality of these interactions is the ability to lead Question-Asking (QA) cycles: initiating goal-oriented inquiries, critically evaluating AI responses, and regulating subsequent strategies. While these behaviors predict robust learning in traditional settings, their role in AI-mediated environments remains unclear. Addressing this gap, this study investigates middle school students' (N=63, aged 14--15) capacity to adopt these behaviors with GenAI during science investigation tasks. We analyzed their proficiency in distinguishing efficient, goal-oriented prompts from inefficient ones, their critical evaluation of AI responses, and their ability to generate follow-up questions that regulate learning in alignment with their informational needs. Findings reveal a pattern of over-reliance: students struggled to discriminate between prompt types, failed to detect vague AI explanations, and frequently terminated inquiry prematurely without follow-up questions. Consequently, task performance remained moderate despite unrestricted AI access and high self-reported prior knowledge. Notably, positive AI attitudes were negatively associated with interaction quality, suggesting a disconnect between perceived and actual competence, whereas higher metacognitive skills predicted superior sensitivity to prompt quality. These results underscore the necessity for AI literacy interventions that move beyond technical understanding to explicitly train the metacognitive regulation strategies required for meaningful and sustainable QA-based learning with GenAI.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations