This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Perceived Learning vs. Engagement in AI-Assisted Homework: A Comparative Study of ChatGPT Use Across High School, University, and Teachers in Sonora, Mexico (2024–2025)
Citations: 0 · Authors: 5 · Year: 2026
Abstract
This study examines how generative AI is adopted and experienced across educational levels in Sonora, Mexico, and whether students’ perceived learning aligns with engagement behaviors during AI-assisted homework. We analyze survey data from 2024–2025 covering 1477 participants (high school and university students and teachers) from public and private institutions, including adoption, perceived learning and time savings, help-seeking preferences (teachers vs. ChatGPT vs. Google), and ethical concerns. To move beyond self-reports alone, we introduce a Learning Engagement Index (LEI; 0–1) based on three student behaviors when using ChatGPT to complete academic tasks: reading AI responses, modifying outputs, and integrating personal ideas. Adoption was widespread but consistently higher in university than in high school for both students and teachers. University students reported slightly higher perceived learning and greater time savings. LEI scores were generally high and higher among university students, indicating more frequent engagement behaviors such as reading and adapting AI outputs rather than copying them. However, perceived learning showed only weak alignment with LEI, suggesting that students’ self-assessments do not consistently track the engagement actions measured by the index. A complementary GitHub Copilot Free (version GPT-4) experiment (n = 16) indicated faster task completion and improved task completeness, while also highlighting the risk of reduced algorithmic reasoning when AI suggestions are used uncritically. Overall, the findings point to the need for pedagogical approaches that emphasize guided use, verification practices, and assessment designs that more directly evidence learning in AI-mediated settings.
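The abstract describes the Learning Engagement Index (LEI; 0–1) as a composite of three student behaviors: reading AI responses, modifying outputs, and integrating personal ideas. The exact aggregation is not given in this summary, so the sketch below assumes a simple unweighted mean of three behavior scores, each normalized to [0, 1]; the function name and weighting are illustrative only.

```python
def learning_engagement_index(read: float, modify: float, integrate: float) -> float:
    """Combine three normalized behavior frequencies (each in [0, 1]) into one LEI score.

    read      -- how often the student reads AI responses
    modify    -- how often the student modifies AI outputs
    integrate -- how often the student integrates personal ideas
    """
    for score in (read, modify, integrate):
        if not 0.0 <= score <= 1.0:
            raise ValueError("behavior scores must lie in [0, 1]")
    # Assumed aggregation: unweighted mean of the three behaviors.
    return (read + modify + integrate) / 3.0

# Example: a student who always reads, often modifies, and sometimes adds own ideas.
print(round(learning_engagement_index(1.0, 0.75, 0.5), 2))  # 0.75
```

Under this assumption, a student who copies outputs without reading, modifying, or adding ideas would score near 0, while one who performs all three behaviors consistently would score near 1, matching the 0–1 range stated in the abstract.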
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,339 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,211 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,614 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,478 citations