This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Artificial intelligence in international English language testing system writing assessments: A comparative study of human ratings and DeepAI
Citations: 0 · Authors: 2 · Year: 2025
Abstract
The International English Language Testing System (IELTS) is a high-stakes exam where Writing Task 2 significantly influences the overall scores, requiring reliable evaluation. While trained human raters perform this task, concerns about subjectivity and inconsistency have led to growing interest in artificial intelligence (AI)-based assessment tools. However, little empirical evidence exists on AI in high-stakes testing, and no study has examined DeepAI in this context. Accordingly, using a repeated measures design, this study investigated the comparability and reliability of human and DeepAI ratings of 145 IELTS Writing Task 2 essays collected from the official IELTS Tehran Test Centre. These essays had been previously scored by certified human examiners and were subsequently rescored by DeepAI using a rubric-based prompt based on IELTS standards. Statistical analyses, including paired sample t-tests and multivariate analysis of variance, were conducted to explore rater differences and scoring alignment. The results revealed no significant differences in the overall band scores between the human and AI assessments; however, minor differences were observed in some specific criteria. Additionally, DeepAI showed strong intra-rater reliability, producing consistent scores over a two-week interval. These findings suggest that DeepAI may serve as a reliable supplementary tool in high-stakes writing assessments. However, full replacement of human judgment remains premature, and a combination of human judgment and AI support may be the most effective approach.
Similar works
Cutoff criteria for fit indexes in covariance structure analysis: Conventional criteria versus new alternatives
1999 · 102,165 citations
Common method biases in behavioral research: A critical review of the literature and recommended remedies.
2003 · 73,085 citations
Evaluating Structural Equation Models with Unobservable Variables and Measurement Error
1981 · 64,419 citations
Coefficient Alpha and the Internal Structure of Tests
1951 · 42,540 citations