This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Understanding learner adoption of generative AI-powered Ed-Tech applications by dissertation-based master students
Citations: 0
Authors: 4
Year: 2026
Abstract
Purpose: This study aims to investigate the elements that influence master students’ behavioral intentions to use generative artificial intelligence (AI) in educational contexts. It examines attitudes toward technology, effort expectancy and performance expectancy, with knowledge sharing as a mediating variable, to develop targeted interventions for enhancing the adoption of generative AI in Education Technology (Ed-Tech).

Design/methodology/approach: The study uses a stratified random sampling method. This sampling technique ensured that participants from various academic disciplines and dissertation themes were well-represented in the sample, thereby increasing data variety and representativeness. The population size was approximately 6,034, and a sample of 392 participants was chosen for the study. The study employed a tripartite approach, utilizing IBM SPSS and AMOS to evaluate the validity and reliability of the investigated constructs. Structural equation modeling was then applied to test the proposed hypotheses.

Findings: The results emphasize the importance of Ed-Tech competencies, effort expectancy and performance expectancy in determining students’ intent to use generative AI. Furthermore, the mediating role of knowledge sharing emphasizes its influence on technological adoption.

Originality/value: This study provides practical implications for academic institutions by informing tailored approaches to optimize student learning outcomes, dissertation progress and graduate employability. Through a comprehensive framework, it aims to promote inclusive technology access and create an environment conducive to maximizing the potential of generative AI-Ed-Tech in enhancing student success.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations