This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Is ChatGPT a Rational Assistant for University Students During Mathematical Reasoning?
Citations: 0
Authors: 3
Year: 2026
Abstract
The study focuses on university students’ engagement with ChatGPT regarding calculus concepts. It examines the influence of ChatGPT on university students’ mathematical reasoning. Two university students with high and low academic performance were prompted to reason individually about the relationship between the concavity of a function and the tangent line. The students were then asked to reason through the solution of the task together with ChatGPT. The structural and process aspects of the reasoning of ChatGPT and the students were analyzed using Toulmin’s model and Habermas’ construct of rationality. The results revealed that, within the structural aspect of reasoning, students sought support from ChatGPT for building and transforming representations in relation to the data, claim, warrant, and backing components. In the process aspect of reasoning, students consulted ChatGPT for comparing, exemplifying, and justifying processes. In most cases, ChatGPT’s responses to students were found not to meet the requirements of epistemic rationality. Students’ evaluation and use of ChatGPT’s responses varied by performance level; compared with the low-performing student, the high-performing student was able to filter errors in ChatGPT’s responses and draw on them for inspiration to behave more rationally within both the structural and process aspects of reasoning. ChatGPT was identified as a tool that could be used by teachers to make students’ behaviors during reasoning more visible and adjustable. Further research is needed to investigate the potential limitations of using ChatGPT during mathematical reasoning in unsupervised out-of-class settings.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,324 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,189 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,588 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,470 citations