This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Comparing the Quality of Human and ChatGPT Feedback on Students’ Writing
Citations: 32
Authors: 9
Year: 2023
Abstract
Offering students formative feedback on drafts of their writing is an effective way to facilitate writing development. This study examined the ability of generative AI (i.e., ChatGPT) to provide formative feedback on students’ compositions. We compared the quality of human and AI feedback by scoring the feedback each provided on secondary student essays (n=200) on five measures of feedback quality: the degree to which feedback (a) was criteria-based, (b) provided clear directions for improvement, (c) was accurate, (d) prioritized essential features, and (e) used a supportive tone. We examined whether ChatGPT and human evaluators provided feedback that differed in quality for native English speakers and English learners and for compositions that differed in overall quality. Results showed that human raters were better at providing high-quality feedback to students in all categories other than criteria-based. Considering the ease of generating feedback through ChatGPT and its overall quality, practical differences between humans and ChatGPT were not substantial. Feedback did not vary by language status for humans or AI, but AI and humans showed differences in feedback based on essay quality. Implications for generative AI as an educational tool are discussed.
Similar Works
BLEU
2001 · 21,140 citations
Enriching Word Vectors with Subword Information
2017 · 9,674 citations
A unified architecture for natural language processing
2008 · 5,188 citations
A new readability yardstick.
1948 · 5,110 citations