This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Empowering learners with AI‐generated content for programming learning and computational thinking: The lens of extended effective use theory
23 citations
2 authors
Year: 2024
Abstract
Background: Artificial intelligence–generated content (AIGC) has stepped into the spotlight with the emergence of ChatGPT, making the effective use of AIGC for education a hot topic.

Objectives: This study explores the effectiveness of integrating AIGC into programming learning through debugging. First, it presents three levels of AIGC integration based on varying levels of abstraction. Then, drawing on extended effective use theory, it proposes the underlying mechanism through which AIGC integration affects programming learning performance and computational thinking.

Methods: Three debugging interfaces integrating AIGC via ChatGPT were developed according to the three levels of AIGC integration design. The study conducted a between-subject experiment with one control group and three experimental groups. Analysis of covariance and a structural equation model were employed to examine the effects.

Results and Conclusions: The results show that the second and third levels of abstraction in AIGC integration yield better learning performance and computational thinking, whereas the first level shows no difference compared to traditional debugging. The test of the underlying mechanism shows that the second and third levels of abstraction promote transparent interaction, which enhances representational fidelity and in turn improves learning performance and computational thinking. Moreover, the study finds that learning fidelity weakens the effect of transparent interaction on representational fidelity. Our research offers valuable theoretical and practical insights.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations