OpenAlex · Updated hourly · Last updated: 12.04.2026, 23:49

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

From intention to enactment: the Cognitive–Motivational Divergence Model (CMDM) as a mechanism-oriented explanation for ChatGPT adoption gaps among EFL translation teachers

2026 · 0 citations · Humanities and Social Sciences Communications · Open Access

Citations: 0 · Authors: 5 · Year: 2026

Abstract

In resource-constrained higher education settings, teachers’ acceptance of generative AI (GenAI) often fails to translate into routine classroom practice, producing a persistent intention–enactment gap. To explain this gap, we propose the Cognitive–Motivational Divergence Model (CMDM) as a mechanism-oriented complement to the Unified Theory of Acceptance and Use of Technology (UTAUT). Using an explanatory sequential mixed-methods design with 134 EFL translation teachers in Gansu, China, an ordinal SEM showed that Perceived Advantages positively predicted Behavioral Intention (β = 0.419) and Application Willingness (β = 0.384), whereas Implementation Concerns negatively predicted intention (β = −0.164). Solution Strategies increased Application Willingness (β = 0.265) and indirectly increased intention via willingness (β_indirect = 0.063). All reported coefficients are statistically significant (p < 0.05) unless noted. The model explained substantial variance in Behavioral Intention on the underlying latent-response (threshold) scale (R² = 0.525). Follow-up interviews with a linked subsample (n = 21) developed four CMDM mechanisms, namely Motivational Inhibition, Institutional Enablers, Bounded Agency, and Efficacy-Driven Integration, which specify how intention is enacted, bounded, or redirected under institutional ambiguity and infrastructural precarity. These mechanisms were synthesized into three enactment pathways (low-visibility enactment, cautious sandboxed integration, and structured governance and coordination-seeking) that showed no one-to-one mapping onto quantitative attitudinal profiles in this linked subsample. This pattern implies a routing problem: even at similar intention levels, enactment varies with how teachers set boundaries (scope and stakes), manage visibility (low-profile vs. sandboxed use), and operationalize disclosure (documentation and attribution) to make use defensible and auditable.
We therefore argue that support should move beyond generic AI literacy toward governance infrastructure (clear rules, pedagogical sandboxes, and disclosure templates) that distributes risk and makes bounded integration defensible.
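Note on the reported indirect effect: it is consistent with the standard product-of-coefficients rule for mediation. The Application Willingness → Behavioral Intention path is not stated in this abstract; the value below is back-derived from the two reported figures and should be read as an inferred approximation, not a reported estimate:

\beta_{\text{indirect}} = \beta_{SS \to AW} \times \beta_{AW \to BI}
\quad\Rightarrow\quad
\beta_{AW \to BI} \approx 0.063 / 0.265 \approx 0.238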

Topics

Artificial Intelligence in Healthcare and Education · AI in Service Interactions · Ethics and Social Impacts of AI