This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Comparative Study of Large Language Models for Adaptive AI Tutoring Systems
Citations: 0
Authors: 3
Year: 2025
Abstract
This study compares a selection of leading Large Language Models (LLMs)—Google Gemini, OpenAI's ChatGPT, xAI's Grok, Mistral, and Cerebras—within a dynamic, AI-driven tutoring system capable of adaptive learning. The system uses an LLM to assess a learner's knowledge by automatically generating and evaluating topic questions, then adjusts the question difficulty and/or content type (e.g. blog post, tutorial, video, technical document) based on an automatic evaluation of the learner's knowledge and abilities. From a learning perspective, all models are capable of educational interaction; however, they differ on important metrics such as accuracy, reasoning, adaptability, cost-effectiveness, and support for student independence. Quantitative and qualitative assessments were conducted in specific technical domains (e.g. cloud computing, web frameworks). ChatGPT provided the greatest clarity and reasoning in its feedback; Gemini offered the lowest latency and the most cost-effective deployment; Mistral delivered accurate and clear answers; Cerebras was strong in structured topics and consistency; and Grok showed developing creativity but was inconsistent in structured learning flows overall. The findings inform the practical trade-offs between LLMs for adaptive tutoring use cases and the scalability of real-time educational platforms. Explainability tools such as SHAP and LIME are integrated for greater transparency. This research offers empirical evidence of the adaptive tutoring capabilities of LLMs and provides a roadmap for future classroom-based validation.
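The adaptive loop summarized in the abstract (generate a question, evaluate the answer, then adjust difficulty and content type) can be sketched roughly as follows. This is a minimal illustration only: the function names, difficulty scale, and score thresholds are assumptions, not the paper's actual implementation, and the LLM-based question generation and answer scoring are abstracted away behind a single `score` value.

```python
# Hypothetical sketch of one adaptation step in an LLM-driven tutoring loop.
# The score (0.0-1.0) stands in for the LLM's automatic evaluation of the
# learner's answer; levels and thresholds are illustrative assumptions.

def adjust_level(level: int, score: float) -> int:
    """Raise difficulty on strong answers, lower it on weak ones (levels 1-5)."""
    if score >= 0.8:
        return min(level + 1, 5)
    if score < 0.5:
        return max(level - 1, 1)
    return level

def pick_content_type(score: float) -> str:
    """Struggling learners get guided formats; strong ones get denser material."""
    if score < 0.5:
        return "tutorial"
    if score < 0.8:
        return "blog post"
    return "technical document"

def tutoring_step(level: int, score: float) -> tuple[int, str]:
    """One adaptation step: return the new difficulty and recommended content type."""
    return adjust_level(level, score), pick_content_type(score)
```

In a real deployment, `score` would come from the LLM's automatic evaluation of the learner's free-text answer, and the selected level and content type would parameterize the next generated question.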
Related Works
A spreading-activation theory of semantic processing.
1975 · 8,019 citations
Cognitive Load During Problem Solving: Effects on Learning
1988 · 7,673 citations
International Conference on Learning Representations (ICLR 2013)
2013 · 6,255 citations
Learning from delayed rewards
1989 · 5,452 citations
Comprehension: A Paradigm for Cognition
1998 · 4,771 citations