OpenAlex · Updated hourly · Last updated: 17.03.2026, 15:55

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Viability of Large Language Models as CS Theory Tutors

2026 · 0 citations
Open full text at the publisher

0 Citations · 3 Authors · Year: 2026

Abstract

Large Language Models (LLMs) promise explanations at a scale that traditional office hours or even intelligent tutoring systems struggle to match. However, their suitability for Computer Science subjects such as Theory of Computing (ToC) remains an open question: LLMs frequently hallucinate, while the goal of ToC courses is to prove precise statements rigorously. In this poster we evaluate OpenAI's GPT-4 model across 18 ToC sub-topics covering regular languages, context-free languages, Turing machines, and (un)decidability. We generated realistic "average-student" questions together with follow-up questions, then scored each answer with a five-criterion rubric: accuracy, completeness, clarity, pedagogical scaffolding, and quality of follow-up questions. Our overall results show that GPT-4 performs only marginally as a ToC tutor; our analysis identifies strengths in conceptual explanation and weak spots on proof-oriented questions, e.g., reductions.
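The rubric-based scoring the abstract describes could be aggregated along these lines. This is a minimal Python sketch, not the authors' method: only the five criterion names come from the abstract, while the 1–5 scale, the equal weighting of criteria, and all identifiers (`RubricScore`, `topic_average`, the example sub-topic) are assumptions.

```python
from dataclasses import dataclass
from statistics import mean

# The five criteria named in the abstract.
CRITERIA = ("accuracy", "completeness", "clarity",
            "pedagogical_scaffolding", "followup_quality")

@dataclass
class RubricScore:
    """Scores for one LLM answer on the five criteria (assumed 1-5 scale)."""
    accuracy: float
    completeness: float
    clarity: float
    pedagogical_scaffolding: float
    followup_quality: float

    def overall(self) -> float:
        # Unweighted mean across criteria; the poster may weight them
        # differently.
        return mean(getattr(self, c) for c in CRITERIA)

def topic_average(scores: list[RubricScore]) -> float:
    """Average overall score across all scored answers for one sub-topic."""
    return mean(s.overall() for s in scores)

# Example: two scored answers for a hypothetical "reductions" sub-topic,
# one of the proof-oriented weak spots the abstract mentions.
answers = [
    RubricScore(3, 2, 4, 3, 3),
    RubricScore(2, 2, 3, 3, 2),
]
print(round(topic_average(answers), 2))  # 2.7
```

Comparing such per-topic averages across the 18 sub-topics is one plausible way to surface the conceptual-explanation strengths versus proof-oriented weaknesses the abstract reports.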

Similar works

Authors

Institutions

Topics

Artificial Intelligence in Healthcare and Education · Explainable Artificial Intelligence (XAI) · Intelligent Tutoring Systems and Adaptive Learning