This is an overview page with metadata for this scientific article. The full article is available from the publisher.
CoDAE: Adapting Large Language Models for Education via Chain-of-Thought Data Augmentation
Citations: 0
Authors: 5
Year: 2025
Abstract
Large Language Models (LLMs) are increasingly employed as AI tutors due to their scalability and potential for personalized instruction. However, off-the-shelf LLMs often underperform in educational settings: they frequently reveal answers too readily, fail to adapt their responses to student uncertainty, and remain vulnerable to emotionally manipulative prompts. To address these challenges, we introduce CoDAE, a framework that adapts LLMs for educational use through Chain-of-Thought (CoT) data augmentation. We collect real-world dialogues between students and a ChatGPT-based tutor and enrich them using CoT prompting to promote step-by-step reasoning and pedagogically aligned guidance. Furthermore, we design targeted dialogue cases to explicitly mitigate three key limitations: over-compliance, low response adaptivity, and threat vulnerability. We fine-tune four open-source LLMs on different variants of the augmented datasets and evaluate them in simulated educational scenarios using both automatic metrics and LLM-as-a-judge assessments. Our results show that models fine-tuned with CoDAE deliver more pedagogically appropriate guidance, better support reasoning processes, and effectively resist premature answer disclosure.
Related Works
A spreading-activation theory of semantic processing.
1975 · 8,043 citations
Cognitive Load During Problem Solving: Effects on Learning
1988 · 7,920 citations
International Conference on Learning Representations (ICLR 2013)
2013 · 6,258 citations
Learning from delayed rewards
1989 · 5,471 citations
Comprehension: A Paradigm for Cognition
1998 · 4,772 citations