This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
ChatGPT in English Writing Assessment: Can AI Accurately Measure Complexity, Accuracy, and Fluency Indices?
Citations: 0
Authors: 3
Year: 2025
Abstract
This paper reports on a practice at a Malaysian university of using ChatGPT to assess English writing proficiency through complexity, accuracy, and fluency (CAF) indices. We compared ChatGPT-generated CAF indices with expert analyses of an academic English writing test. Results showed that ChatGPT's scores aligned well with expert ratings for syntactic complexity and fluency but were inconsistent for accuracy and lexical complexity, with the latter often underestimated by ChatGPT. To address these issues, we proposed human-in-the-loop validation, in which experts reviewed and refined AI-generated outputs. While artificial intelligence demonstrated efficiency in the linguistic quantification of English writing CAF, its variability in nuanced assessments highlights the need for ongoing refinement. Future research should further examine the reliability of artificial intelligence across different writing contexts and analytic measures.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,312 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,169 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,564 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,466 citations