This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Critical or Confident? AI Literacy and Student–AI Collaboration in Higher Education
Citations: 1
Authors: 1
Year: 2025
Abstract
Large language models (LLMs) are rapidly transforming programming education, yet little is known about how students’ AI literacy and skills shape human–AI collaboration. We conducted an exploratory study with 23 computer science and robotics students who completed discipline-specific programming tasks in which the LLM produced the code and students guided and debugged it. AI literacy, measured with the SNAIL instrument, was generally high, with Practical Application rated highest and Critical Appraisal lowest, suggesting a risk of overconfidence. No robust links emerged between AI literacy, programming skill, task duration, or perceived chatbot output quality. Qualitative feedback revealed both enthusiasm for experimentation and frustration with AI errors. Findings indicate that brief, well-structured tasks can support productive AI–student collaboration, but stronger emphasis on critical evaluation, domain-specific skills, and appropriate scaffolding is needed.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,402 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,270 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,702 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,507 citations