This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Generative Artificial Intelligence for Automated Qualitative Feedback: A Cross-Comparison of Prompting Strategies
Citations: 1
Authors: 2
Year: 2026
Abstract
Recent studies have highlighted the potential of generative artificial intelligence, such as ChatGPT, to address challenges in providing accurate and pedagogically relevant feedback. However, empirical evidence on how prompt engineering shapes feedback quality remains limited. This study examined how zero-shot, few-shot, and chain-of-thought prompting strategies influenced the accuracy and depth of ChatGPT-generated qualitative feedback on second language (L2) essays. A total of 176 essays from Filipino and Thai learners with intermediate English proficiency were evaluated using ChatGPT-4o under the three prompting strategies. The findings showed that few-shot prompting achieved the highest accuracy, while chain-of-thought prompting produced the most elaborated feedback, particularly in addressing grammatical complexity. Zero-shot prompting lagged in both accuracy and depth, with notable issues in grammatical feedback. Implications for L2 writing instruction, assessment, and research are discussed.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,102 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,468 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations