This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Changing EAP assessment practices in the age of generative artificial intelligence: The case of Scottish higher education institutions
Citations: 2
Authors: 2
Year: 2025
Abstract
The impact of generative artificial intelligence (GenAI) on higher education has been widely discussed since the public release of ChatGPT (GPT-3.5) in late 2022. However, there has been little empirical research on changes in English-for-Academic-Purposes (EAP) assessment practices in response to GenAI. This qualitative case study fills this gap by examining how Scottish universities changed EAP assessments in response to GenAI, how effective EAP academics perceived those changes to be, and what recommendations EAP academics offered for future assessment practices. Data were collected from six semi-structured interviews conducted with EAP academics at five Scottish universities in mid-2024 and thematically analysed. The findings reveal that while substantial changes in assessment task design were limited, modifications to task requirements (e.g., GenAI declarations, context-specific prompts) and grading practices were more common. Moreover, our participants expressed scepticism about the effectiveness of some changes (e.g., AI use declarations) but perceived others positively (e.g., the use of context-specific questions, spontaneous speaking tasks, and named marking). As for their recommendations, the participating EAP academics generally advocated authentic and innovative tasks, such as portfolio-based assessment, reflections, multimodal projects, and GenAI output evaluation, over reverting to traditional exams, while simultaneously highlighting issues with workload and learning outcomes. The study implies a need for clearer institutional guidance, ongoing professional dialogue, and support for experimentation with GenAI-integrated assessment design in EAP contexts.
Highlights
• Substantial changes in EAP assessment task design were limited.
• Modifications to task-specific requirements and grading were more common.
• Participants were sceptical about procedural changes like AI use declarations.
• They were positive about using specific and spontaneous tasks and named marking.
• Participants advocated innovative tasks over reverting to traditional exams.