This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Who gets to use ChatGPT? A global study on digital access and inequality in higher education
Citations: 0
Authors: 3
Year: 2026
Abstract
This study examines how national-level digital development – measured by the ICT Development Index (IDI) – affects university students’ use of ChatGPT. Special emphasis is placed on mediating factors that may influence this relationship, including technical access, institutional and linguistic support, and individual background characteristics, particularly in relation to educational equity and sustainability (SDG 4). The analysis is based on survey data from 20,242 students across 58 countries and applies multivariate statistical methods, including logistic regression, PLS-SEM modeling, and cluster analysis. The results indicate that students in countries with higher IDI scores are more likely to use ChatGPT, primarily because of more advanced digital competencies and greater technological access. The country of study proved to be a stronger predictor than citizenship, underscoring the key role of the local educational environment. Functional access emerged as the most decisive mediating factor, while institutional and linguistic support had a more indirect effect on usage. Cluster analysis identified three distinct student profiles and highlighted that a high level of digital infrastructure alone does not ensure the widespread adoption of generative AI tools. The study proposes a multi-level interpretive framework: at the macro level, national digital infrastructure; at the meso level, institutional and linguistic support; and at the micro level, individual characteristics – connected by functional access as a mediating dimension. This context-sensitive approach contributes to a more comprehensive and practice-oriented understanding of digital inequalities and the integration of generative AI in higher education, offering guidance for promoting inclusive and sustainable technology use.
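The abstract describes functional access as the key mediator between national digital development (IDI) and ChatGPT use. As a minimal illustration of that mediation logic, the sketch below computes a product-of-coefficients indirect effect and a total effect. All path coefficients and function names here are invented for illustration; they are not the study's estimates or methods.

```python
# Hypothetical sketch of the mediation structure summarized in the abstract:
# IDI -> functional access -> ChatGPT use. Coefficients are made up.

def indirect_effect(a: float, b: float) -> float:
    """Product-of-coefficients indirect effect (a * b)."""
    return a * b

def total_effect(a: float, b: float, c_prime: float) -> float:
    """Total effect = direct effect (c') + indirect effect (a * b)."""
    return c_prime + indirect_effect(a, b)

# Invented example values, chosen only to show the arithmetic:
a = 0.6        # path: IDI -> functional access
b = 0.5        # path: functional access -> ChatGPT use
c_prime = 0.1  # direct path: IDI -> ChatGPT use, controlling for access

print(indirect_effect(a, b))
print(total_effect(a, b, c_prime))
```

In this toy setup most of the IDI effect flows through the mediator, which mirrors the abstract's claim that functional access was the most decisive mediating factor.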
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations