This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Two years after ChatGPT: a thematic analysis of First-Year students’ reflections on AI tool use in higher education
Citations: 1 · Authors: 2 · Year: 2025
Abstract
Purpose
This study investigates how first-year university students engage with generative AI tools, especially ChatGPT, in real academic contexts. As AI becomes more embedded in higher education, it is critical to understand how students use these technologies, what benefits they perceive and what concerns they raise. By analyzing 38 student reflections from a Fundamentals of Technology course, the study explores student motivations, behaviors and attitudes, contributing to a more grounded understanding of how generative AI is reshaping undergraduate learning practices.

Design/methodology/approach
Using an inductive thematic analysis, this qualitative study examines 38 voluntary student reflections on AI tool use in a first-year technology course. Thematic analysis followed Braun and Clarke's six-phase process and was grounded in students' authentic descriptions of their interactions with AI. No predefined codebook was used. Themes emerged around the purposes for which students used AI, perceived benefits, challenges and attitudes toward the technology. This bottom-up approach ensured findings remained closely tied to student voices and everyday academic experience.

Findings
Five major themes emerged: (1) purpose of AI use, (2) perceived benefits, (3) challenges with AI, (4) student attitudes and (5) specific AI tools used. Students reported using AI for writing support, problem-solving and idea generation. ChatGPT was the most frequently used tool, followed by Grammarly, QuillBot and Perplexity. While most reflections were positive, some students expressed concerns about overreliance, privacy and accuracy. The findings suggest students view AI as a flexible academic aid, but with limited reflection on potential drawbacks or ethical considerations.

Research limitations/implications
This study is limited by its sample size and context, focusing on a single course at one institution. Reflections were not initially collected for research purposes, which may impact depth and generalizability. However, the candid, real-world insights provide valuable starting points for understanding AI integration in early undergraduate education. Future research should explore AI use across diverse institutions and instructional contexts and investigate how students develop AI literacy and ethical reasoning over time.

Practical implications
As generative AI becomes commonplace in student workflows, educators must rethink assignment design, academic integrity policies and digital literacy instruction. This study suggests that many students already use AI for legitimate academic support, such as organizing writing and solving math problems. Rather than prohibiting these tools, faculty can integrate them into coursework, encouraging critical reflection on their use. Introducing prompt engineering and source evaluation into curricula may help students use AI responsibly while enhancing their learning outcomes.

Social implications
Student reflections indicate that AI tools like ChatGPT may help democratize academic support by offering accessible, on-demand assistance, particularly for students without access to tutoring or support services. However, concerns around data privacy, overreliance and potential impacts on independent thinking underscore the need for institutional guidance. These findings highlight broader tensions between innovation, equity and ethics in higher education's digital transformation.

Originality/value
This paper offers a rare, grounded analysis of real student reflections on AI tool use in academic settings. While much existing research is hypothetical or policy-driven, this study centers student voices, providing insights into actual behaviors, motivations and concerns. By focusing on first-year students, the study reveals early patterns in AI adoption and contributes to conversations about ethical, instructional and infrastructural responses in higher education.