This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Student Perceptions of Generative AI in Personalized Distance Learning: The Moderating Effects of Usage Frequency and Faculty Encouragement
Citations: 0
Authors: 2
Year: 2025
Abstract
As generative artificial intelligence (GAI) tools such as ChatGPT, Grammarly, and Quillbot become increasingly embedded in digital education, understanding how students perceive their role in personalized distance learning, across both asynchronous and synchronous modes, remains crucial. Anchored in Sustainable Development Goal 4 (SDG 4), which promotes inclusive and equitable quality education and lifelong learning opportunities for all, this study investigates how GAI utilization relates to students' perceptions of usefulness, motivational impact, and ethical implications. It also examines whether usage frequency and faculty encouragement moderate these relationships. Using a descriptive-correlational design with moderation analysis, data were collected from 327 undergraduate students at Tagoloan Community College through a validated questionnaire (CVI = 0.94). Overall, findings revealed that students perceived GAI as highly beneficial for learning and self-motivation, reflecting growing confidence in using AI responsibly. Results from General Linear Modeling indicated that GAI utilization was a significant predictor of students' perceptions (F = 169.32, p < .001, η²ₚ = .345), suggesting that direct engagement with AI tools strongly shapes perceived educational value and learning experiences. However, neither usage frequency (F = 0.99, p = .396) nor faculty encouragement (F = 0.75, p = .475) significantly moderated this relationship. Interestingly, despite limited faculty support (11.3%) and the predominant use of ChatGPT (96.9%), students demonstrated ethical awareness through responses that emphasized citation practices, verification of AI-generated outputs, and the avoidance of plagiarism, indicating reflective and responsible learning behaviors. These findings highlight the primacy of student agency over institutional influence in fostering meaningful AI engagement.
The study recommends that educators and institutions implement structured, ethical, and student-centered integration of AI tools into curricula through digital literacy workshops, academic integrity guidelines, and scaffolded AI-supported learning tasks to enhance autonomy, critical engagement, and responsible technology use, aligned with the objectives of SDG 4.