This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Exploring trust in generative AI for higher education institutions: a systematic literature review focused on educators
Citations: 2
Authors: 4
Year: 2025
Abstract
Although Generative Artificial Intelligence (GenAI) offers transformative opportunities for higher education, its adoption by educators remains limited, primarily due to trust concerns. This systematic literature review aims to synthesise peer-reviewed research conducted between 2019 and August 2024 on the factors influencing educators’ trust in GenAI within higher education institutions. Using PRISMA 2020 guidelines, this study identified 37 articles at the intersection of trust factors, technology adoption, and GenAI impact in higher education from educators’ perspectives. Our analysis reveals that existing AI trust frameworks fail to capture the pedagogical and institutional dimensions specific to higher education contexts. We propose a new conceptual model focused on three dimensions affecting educators’ trust: (1) individual factors (demographics, pedagogical beliefs, sense of control, and emotional experience), (2) institutional strategies (leadership support, policies, and training support), and (3) the socio-ethical context of their interaction. Our findings reveal a significant gap in institutional leadership support, whereas professional development and training were the most frequently mentioned strategies. Pedagogical and socio-ethical considerations remain largely underexplored. The practical implications of this study emphasise the need for institutions to strengthen leadership engagement, align GenAI adoption strategies with educators’ values, and develop comprehensive training frameworks that address ethical and pedagogical concerns. This work contributes a multidimensional view of educators’ trust in GenAI and provides a foundation for future research.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,292 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,143 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,539 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,452 citations