This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Global Educator Typologies for ChatGPT Adoption: Data-Driven Insights into Support Gaps and AI-Enhanced Teaching
Citations: 0
Authors: 6
Year: 2025
Abstract
Generative-AI tools such as ChatGPT are spreading rapidly through higher education, yet instructor uptake remains uneven and poorly characterised. Leveraging the openly licensed ChatGPT Teacher Survey (n = 318 instructors, 25 countries, six continents), this study delivers a quantitative typology of educator responses and pinpoints the institutional factors that shape them. Four adoption indicators (prior exposure, perceived curriculum impact, perceived assessment impact, and institutional-support adequacy) were z-standardised and clustered via k-means. The three-cluster solution (silhouette = 0.23; Calinski-Harabasz = 99.4) yielded AI Enthusiasts (24%), Cautious Integrators (40%), and Sceptics (36%). Multinomial-logistic analysis with HC3-robust errors shows that region is the only significant predictor: instructors in Australasia-Asia are 3.0 times more likely to be Sceptics (95% CI [1.5, 6.2]), while European faculty are 2.7 times more likely to be Integrators (95% CI [1.2, 6.1]); gender and teaching experience are non-significant. Support perceptions diverge sharply: 0% of Enthusiasts versus 46% of Integrators and 84% of Sceptics report inadequate institutional backing. Performance evaluations amplify this divide: in grading ChatGPT answers (n = 141), Enthusiasts award a mean mark of 78/100, significantly higher than Integrators (66/100; F(2, 138) = 3.69, p = 0.027, η² = 0.05; Cohen's d = 0.48). Robustness checks (split-sample validation with Adjusted Rand = 0.86, hierarchical clustering, and alternative imputation) confirm solution stability. These findings expose a substantial support gap for three-quarters of educators and demonstrate that regional context, not personal demographics, drives adoption stance. The typology framework offers actionable personas for targeted professional development and typology-aware AI-system design; all code and derived data are openly shared for replication.
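The clustering pipeline described in the abstract (z-standardise four indicators, run k-means with k = 3, and evaluate with silhouette and Calinski-Harabasz scores) can be sketched as follows. This is a minimal illustration using scikit-learn with synthetic placeholder data, not the authors' released code; the variable names and random data are assumptions for illustration only.

```python
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_score, calinski_harabasz_score

rng = np.random.default_rng(0)
# Synthetic stand-in for the survey's four adoption indicators
# (prior exposure, curriculum impact, assessment impact, support adequacy)
# across n = 318 instructors; the real survey data would replace this.
X = rng.normal(size=(318, 4))

# z-standardise each indicator, then cluster with k-means (k = 3)
Xz = StandardScaler().fit_transform(X)
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(Xz)

# Internal-validity indices; the paper reports silhouette = 0.23 and
# Calinski-Harabasz = 99.4 on the actual survey data.
sil = silhouette_score(Xz, km.labels_)
ch = calinski_harabasz_score(Xz, km.labels_)
print(f"silhouette = {sil:.2f}, Calinski-Harabasz = {ch:.1f}")
```

With real survey data, cluster sizes would correspond to the three personas (Enthusiasts, Integrators, Sceptics); on the random placeholder data the indices are meaningless and serve only to show the API flow.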
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations