This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Adopting Generative AI in Higher Education: A Dual-Perspective Study of Students and Lecturers in Saudi Universities
Citations: 3
Authors: 3
Year: 2025
Abstract
The integration of Generative Artificial Intelligence (GenAI) tools, such as ChatGPT, into higher education has introduced new opportunities and challenges for students and lecturers alike. This study investigates the psychological, ethical, and institutional factors that shape the adoption of GenAI tools in Saudi Arabian universities, drawing on an extended Technology Acceptance Model (TAM) that incorporates constructs from Self-Determination Theory (SDT) and ethical decision-making. A cross-sectional survey was administered to 578 undergraduate students and 309 university lecturers across three major institutions in Southern Saudi Arabia. Quantitative analysis using Structural Equation Modelling (SmartPLS 4) revealed that perceived usefulness, intrinsic motivation, and ethical trust significantly predicted students’ intention to use GenAI. Perceived ease of use influenced intention both directly and indirectly through usefulness, while institutional support positively shaped perceptions of GenAI’s value. Academic integrity and trust-related concerns emerged as key mediators of motivation, highlighting the ethical tensions in AI-assisted learning. Lecturer data revealed a parallel set of concerns, including fear of overreliance, diminished student effort, and erosion of assessment credibility. Although many faculty members had adapted their assessments in response to GenAI, institutional guidance was often perceived as lacking. Overall, the study offers a validated, context-sensitive model for understanding GenAI adoption in education and emphasises the importance of ethical frameworks, motivation-building, and institutional readiness. These findings offer actionable insights for policy-makers, curriculum designers, and academic leaders seeking to responsibly integrate GenAI into teaching and learning environments.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations