This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
From intention to enactment: the Cognitive–Motivational Divergence Model (CMDM) as a mechanism-oriented explanation for ChatGPT adoption gaps among EFL translation teachers
Citations: 0
Authors: 5
Year: 2026
Abstract
In resource-constrained higher education settings, teachers’ acceptance of generative AI (GenAI) often fails to translate into routine classroom practice, producing a persistent intention–enactment gap. To explain this gap, we propose the Cognitive–Motivational Divergence Model (CMDM) as a mechanism-oriented complement to the Unified Theory of Acceptance and Use of Technology (UTAUT). In an explanatory sequential mixed-methods study of 134 EFL translation teachers in Gansu, China, an ordinal structural equation model (SEM) showed that Perceived Advantages positively predicted Behavioral Intention (β = 0.419) and Application Willingness (β = 0.384), whereas Implementation Concerns negatively predicted intention (β = −0.164). Solution Strategies increased Application Willingness (β = 0.265) and indirectly increased intention via willingness (β_indirect = 0.063). All reported coefficients are statistically significant (p < 0.05) unless noted. The model explained substantial variance in Behavioral Intention on the underlying latent-response (threshold) scale (R² = 0.525). Follow-up interviews with a linked subsample (n = 21) developed four CMDM mechanisms, namely Motivational Inhibition, Institutional Enablers, Bounded Agency, and Efficacy-Driven Integration, which specify how intention is enacted, bounded, or redirected under institutional ambiguity and infrastructural precarity. These mechanisms were synthesized into three enactment pathways (low-visibility enactment, cautious sandboxed integration, and structured governance and coordination-seeking) that showed no one-to-one mapping onto quantitative attitudinal profiles in this linked subsample. This pattern implies a routing problem: even at similar intention levels, enactment varies with how teachers set boundaries (scope and stakes), manage visibility (low-profile vs. sandboxed use), and operationalize disclosure (documentation and attribution) to make use defensible and auditable. We therefore argue that support should move beyond generic AI literacy toward governance infrastructure (clear rules, pedagogical sandboxes, and disclosure templates) that distributes risk and makes bounded integration defensible.
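The indirect effect reported in the abstract is consistent with the standard product-of-coefficients rule for a mediated path in SEM. A minimal worked sketch follows; note that the Application Willingness → Behavioral Intention coefficient is not reported in this abstract, so the ≈ 0.238 value below is back-calculated from the two figures given (0.063 / 0.265) and is an inference, not a reported result.

\[
% Mediated path: Solution Strategies (SS) -> Application Willingness (AW) -> Behavioral Intention (BI).
% The AW -> BI coefficient (~0.238) is an inferred assumption, not reported in the abstract.
\beta_{\text{indirect}}
  = \beta_{\text{SS} \to \text{AW}} \times \beta_{\text{AW} \to \text{BI}}
  \approx 0.265 \times 0.238
  \approx 0.063
\]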
Similar works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,436 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,311 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,753 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,523 citations