This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
University teachers at the crossroads: unpacking their intentions toward ChatGPT's instructional use
Citations: 11
Authors: 5
Year: 2024
Abstract
Purpose: The objective of this study was to elucidate the intentions of university teachers regarding the use of ChatGPT for instructional purposes.

Design/methodology/approach: In this cross-sectional quantitative study, data were collected through an online survey tool from 493 university teachers across Pakistan.

Findings: The findings revealed that positive attitudes and a sense of perceived behavioral control had a positive impact on teachers' adoption of ChatGPT for instructional purposes, whereas subjective norms exhibited a significant negative influence. The results underscore that teachers are inclined to embrace ChatGPT for instruction because they recognize its educational utility. However, their social environment, including coworkers and managers, does not appear to significantly influence their decisions.

Research limitations/implications: The findings bear implications for devising relevant policies that support AI integration in curricula and assessments and in teachers' professional development (PD) programs. Guidelines are needed at both the university and policy tiers to make ChatGPT use more relevant. Future research should strive to generate insights into AI use in the areas of curriculum, assessment and teachers' PD.

Originality/value: The study adds to the relatively new literature on the integration of ChatGPT in higher education. Its findings contribute to the body of knowledge on AI's pedagogical use and set future directions for considering the factors that influence meaningful and responsible use of AI in teaching and learning.
Related works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 citations