This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Students’ Perceptions of Generative Artificial Intelligence (GenAI) Use in Academic Writing in English as a Foreign Language
Citations: 20
Authors: 4
Year: 2025
Abstract
While research articles on students’ perceptions of large language models such as ChatGPT in language learning have proliferated since ChatGPT’s release, few studies have focused on these perceptions among English as a foreign language (EFL) university students in South America or on their application to academic writing in a second language (L2) for STEM classes. ChatGPT can generate human-like text, a capability that worries teachers and researchers. Academic cheating, especially in the language classroom, is not new; the concept of AI-giarism, however, is novel. This study evaluated how 56 undergraduate university students in Ecuador viewed GenAI use in academic writing in English as a foreign language. The findings indicate that students worried more about hindering the development of their own writing skills than about the risk of being caught and facing academic penalties. Students believed that ChatGPT-written work is easily detectable and that institutions should incorporate plagiarism detectors. Submitting chatbot-generated text in the classroom was perceived as academic dishonesty, whereas fewer participants believed that submitting an assignment machine-translated from Spanish to English was dishonest. The results of this study will inform academic staff and educational institutions about how Ecuadorian university students perceive the overall influence of GenAI on academic integrity within the scope of academic writing, including the reasons why students might rely on AI tools for dishonest purposes and how they view the detection of AI-based work. Ideally, policies, procedures, and instruction should prioritize using AI as an emerging educational tool rather than as a shortcut to bypass intellectual effort. Pedagogical practices should minimize the factors shown to lead to the unethical use of AI, which, in our survey, were academic pressure and lack of confidence. By and large, these factors can be mitigated with approaches that prioritize the process of learning rather than the production of a product.