This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Exploring and Comparing Scaffolding Strategies of ChatGPT-3.5 and a Customized GPT for Reading Comprehension
Citations: 0
Authors: 2
Year: 2025
Abstract
This study compares the scaffolding strategies generated by ChatGPT-3.5 and a customized GPT in reading comprehension exercises designed to help Thai university students reach the minimum CEFR B2 level required for Thai bachelor's degree programs. A prompt for ChatGPT-3.5 was designed to generate four reading passages, each with five multiple-choice questions. A similar approach was used to configure a customized GPT, which was also supplied with a prepared file containing four reading passages and five multiple-choice questions. Data were collected from both versions' responses when, for each question, two incorrect answers and then one correct answer were selected. The results revealed that the customized GPT generated more meaningful and diverse scaffolding strategies, whereas ChatGPT-3.5 produced consistent but limited responses focused on specific reading strategies. Furthermore, some valuable strategies, such as misconception correction and the promotion of critical thinking, were absent from ChatGPT-3.5's output. While both versions offer educational value, they differ in the depth and range of scaffolds provided. Educators and researchers should carefully consider these differences when integrating generative AI into instructional design. In particular, this study highlights the importance of grounding AI-assisted learning in established pedagogical theories, such as scaffolding, to support core language skills like reading comprehension. As generative AI becomes more common in classrooms, thoughtful implementation and training for both instructors and students will be key to maximizing its potential within the Thai educational context.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,250 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,109 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,482 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,434 citations