This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Academics as adopters of generative AI: an application of diffusion of innovations theory
Citations: 2
Authors: 2
Year: 2025
Abstract
The background for this research is an ongoing discussion about generative AI in the higher education context. Drawing on Rogers’ Diffusion of Innovations (DOI) theory, this study investigates the antecedents of ChatGPT adoption among 640 academics from ten major Polish universities. Seven hypotheses were tested using partial least squares structural equation modeling (PLS‑SEM). The results reveal that relative advantage (β = 0.240), compatibility (β = 0.214), and perceived complexity (β = 0.383) significantly influence behavioural intention, which in turn strongly predicts actual use (β = 0.558). Trialability exerts a modest but significant effect on intention (β = 0.071), whereas observability is non‑significant (β = −0.004). Personal innovativeness further enhances actual use (β = 0.209). Collectively, the model explains 49.6% of the variance in behavioural intention and 45.0% in actual usage. The results suggest that academics perceive ChatGPT as a tool that facilitates and enhances academic and teaching work. The study fills a gap in the literature on the adoption of ChatGPT in academia from the DOI perspective. The findings highlight the importance of factors such as complexity and relative advantage in the adoption of technological innovations in higher education. Further research is recommended on the implementation of AI tools in teaching and their impact on the efficiency of academic work.
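The two structural equations behind the reported coefficients can be sketched as simple linear combinations of standardized scores. This is an illustrative reconstruction only: the variable names and input values below are hypothetical, and the actual model was estimated with PLS‑SEM on 640 survey responses, not computed this way.

```python
# Reported standardized path coefficients on behavioural intention (BI)
paths_bi = {
    "relative_advantage": 0.240,
    "compatibility": 0.214,
    "perceived_complexity": 0.383,
    "trialability": 0.071,
    "observability": -0.004,
}

# Reported paths on actual use (AU)
BETA_BI_TO_AU = 0.558
BETA_INNOVATIVENESS_TO_AU = 0.209

def predicted_bi(scores):
    """Linear combination of standardized predictor scores (hypothetical inputs)."""
    return sum(paths_bi[name] * value for name, value in scores.items())

def predicted_au(bi, personal_innovativeness):
    """Actual use as predicted by intention and personal innovativeness."""
    return BETA_BI_TO_AU * bi + BETA_INNOVATIVENESS_TO_AU * personal_innovativeness

# Example: a respondent one standard deviation above the mean on every predictor
scores = {name: 1.0 for name in paths_bi}
bi = predicted_bi(scores)              # 0.904
au = predicted_au(bi, personal_innovativeness=1.0)
print(round(bi, 3), round(au, 3))      # 0.904 0.713
```

The sketch makes the relative weights visible: perceived complexity carries the largest path to intention, while observability's contribution is negligible.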
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,239 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,095 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,463 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,428 citations