This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Analysis of Students’ Difficulties in Using ChatGPT to Solve Routine Mechanics of Motion Problems
Citations: 0 · Authors: 5 · Year: 2026
Abstract
This study analyzes university students’ difficulties in using ChatGPT to solve routine mechanics of motion problems by mapping challenges across the problem-solving cycle and explaining how these difficulties emerge during student–AI interactions. A sequential explanatory mixed-methods design was employed. In the quantitative phase, 70 Physics Education and Science Education undergraduates who had completed Basic Physics or Mechanics and had used ChatGPT for learning completed a 24-item Likert questionnaire covering six dimensions: problem representation, prompt formulation, understanding solution steps, evaluation and verification, integration into one’s own solution, and self-regulation/technical constraints. Descriptive statistics, ANOVA with post-hoc tests, and correlation analyses were conducted. The overall difficulty level was moderate (M ≈ 3.22), with 61.4% in the moderate category and 18.6% in the high category. Evaluation and verification emerged as the most critical difficulty (M ≈ 3.69; 45.7% high). Significant differences were found by semester and frequency of ChatGPT use, but not by study program; early-semester and rare users reported higher difficulty, especially in verification. Correlations indicated a chain linking prompting, understanding, and verification (e.g., D3–D4 r = 0.62). In the qualitative phase, interviews and reflections with nine students (high/moderate/low difficulty) showed that incomplete problem representation and reactive prompt revision led to superficial understanding and premature trust in AI outputs, with limited unit, sign, and plausibility checks. The findings highlight verification as the main bottleneck and support instructional designs that foreground modeling, evaluative routines, and metacognitive regulation in AI-supported physics learning.
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,393 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,259 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,688 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,781 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,502 citations