This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
The Perspectives of Undergraduate Students on the Effects of Generative AI on Academic Achievement
Citations: 0
Authors: 2
Year: 2025
Abstract
With the growing integration of generative Artificial Intelligence (AI) tools such as ChatGPT, Claude, DeepSeek, and Gemini in higher education, understanding their impact on undergraduate learning has become increasingly essential. This study investigated how students in the United Arab Emirates perceive the academic influence of generative AI, drawing on data from 1260 undergraduate survey responses. Utilizing the Generative AI Academic Impact Scale (GAI-AIS), the research explored students’ frequency of AI use, self-rated proficiency, perceived academic benefits, and ethical considerations. Statistical analyses, including Analysis of Variance (ANOVA), Pearson correlation, and multiple linear regression, revealed that frequent AI use and greater conceptual understanding were positively associated with academic performance. However, a negative relationship between self-rated proficiency and Cumulative Grade Point Average (CGPA) suggested possible overconfidence or misalignment between perceived and actual skills. Thematic analysis of open responses indicated that students predominantly use AI for writing support, test preparation, and clarification of academic concepts. In response to these findings, the study proposes the development of an AI-Based Academic Support System (AI-BASS), a centralized, curriculum-aligned platform designed to facilitate structured, ethical, and pedagogically grounded AI use in undergraduate education. By promoting responsible engagement, enhancing digital literacy, and safeguarding academic integrity, AI-BASS offers a forward-thinking framework for integrating generative AI within higher education. These insights contribute to broader discussions on AI in academia and inform institutional policies and curriculum design.
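The abstract reports Pearson correlations between AI-use measures and CGPA. As a minimal illustration of that kind of analysis (using entirely hypothetical data, not the paper's survey responses), the correlation coefficient can be computed as follows:

```python
# Sketch of a Pearson correlation like those in the study.
# The data below are hypothetical; the actual survey items and
# CGPA values are not reproduced here.
from math import sqrt

def pearson_r(x, y):
    """Pearson product-moment correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Hypothetical example: weekly AI-use frequency vs. CGPA for five students.
use_freq = [1, 3, 5, 7, 9]
cgpa = [2.8, 3.0, 3.3, 3.5, 3.6]
print(round(pearson_r(use_freq, cgpa), 3))  # → 0.988
```

A positive r, as in this toy example, would correspond to the study's finding that frequent AI use is associated with higher academic performance; the regression and ANOVA steps would build on the same data.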
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,674 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,583 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 8,105 citations
BioBERT: a pre-trained biomedical language representation model for biomedical text mining
2019 · 6,862 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations