This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
EFL Students’ Use, Perceptions, and Reliance on Chat-GPT for Editing and Proofreading: A Technology Acceptance Model Perspective
0 citations · 4 authors · 2025
Abstract
The rapid growth of studies on Chat-GPT acceptance within the broader context of AI in education (AIEd) has provided valuable insights into how participants across settings perceive and use this tool for teaching and learning. This study replicates earlier investigations on AI acceptance but narrows the focus to a specific task: editing and proofreading. It also expands the inquiry to address ethical concerns and overreliance—two recurring themes in AIEd research. A modified extended TAM questionnaire covering seven aspects was distributed to 71 first-year EFL university students enrolled in a writing course that permitted Chat-GPT only for editing and proofreading, with clear restrictions. Group interviews were also conducted. Quantitative data were analyzed using descriptive statistics; qualitative data were examined thematically. Findings reveal a consistent three-step use of Chat-GPT: prompting, pasting the manuscript, and reviewing. Students treated AI output as a draft for enhancement, not as final work. Variation emerged in how much students revised AI-suggested edits, suggesting differing levels of reliance. The study confirms that perceived usefulness and ease of use contribute to students' attitudes and intentions, moderated by self-image and subjective norms. While long-term dependency remains unclear, students appeared cautious when boundaries were set. This study suggests that when lecturers provide clear guidelines, students tend to view Chat-GPT as a learning aid and show awareness of academic integrity and authorship. The findings underline the need for well-defined institutional policies on AI use in writing instruction, while acknowledging the study's contextual limitations and the need for further research.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,316 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,177 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,575 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,468 citations