This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
The Paradox of Augmentation and Erosion: Navigating Utility, Critical Agency, and Trust Deficit in AI Writing Tools among Cambodian English Major Undergraduates
0 citations · 1 author · 2025
Abstract
This qualitative study investigates the complex experiences, perceptions, and concerns of Cambodian English major undergraduate students regarding the use of Artificial Intelligence (AI) tools (e.g., ChatGPT, Grammarly, and Quillbot) for writing. The study applied thematic analysis to 143 open-ended survey responses. The participants were undergraduate students majoring in English at public and private universities in Cambodia. Four major themes were identified. First, AI as an augmentative tool: efficiency and skill support, highlighting AI's perceived benefits in accelerating drafting, structuring ideas, and improving grammar. Second, the paradox of dependence: balancing AI utility with critical agency, revealing users' deep anxiety over the risk of cognitive erosion and their proactive emphasis on the need for critical self-regulation. Third, trust deficit: the challenge of accuracy and contextual failure, detailing concerns over factual errors, generic output, and AI's inability to handle specific local contexts. Fourth, practical barriers: financial, technical, and accessibility limitations, identifying cost and unreliable performance as constraints. The findings confirm a paradox of augmentation and erosion, in which AI is viewed both as an essential tool for efficiency and as a threat to intellectual integrity. The research provides scholars with an understanding of how sociocultural, economic, and pedagogical conditions in developing educational contexts actively shape AI adoption among students. The study underscores the urgent need for pedagogical interventions that promote critical digital literacy and self-regulatory strategies, particularly in contexts where practical barriers and trust deficits shape user interaction with generative AI technologies.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,493 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,377 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,835 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,555 citations