This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Practical Impact of ChatGPT in Introduction to Computer Science Course: Exam Score and Real Learning Effectiveness
0
Citations
2
Authors
2025
Year
Abstract
This study evaluates the impact of ChatGPT on student learning and test performance through a structured 18-week experiment conducted in an introductory computer science course at Soochow University. A total of 117 students participated, divided into a control group (59 students, no ChatGPT assistance) and an experimental group (58 students, ChatGPT-assisted testing). The experiment included four exams, a midterm, and a final exam, along with pre- and post-study surveys to assess students' perceptions of ChatGPT. The results indicate a consistent improvement in test scores among students in the experimental group who used ChatGPT for test assistance. Compared to the control group, the average score difference was 19.1 points on the second exam, 26.901 points on the third exam, and 40.389 points on the fourth exam, an overall increase of approximately 47.7% from the second to the fourth exam. These findings highlight the importance of precise input prompts in generating accurate responses. Additionally, as students became more proficient in using ChatGPT, their trust in the tool also increased, raising concerns about potential overreliance. While ChatGPT-assisted assessments provided short-term performance gains, long-term knowledge retention remained limited. Although the experimental group outperformed the control group in ChatGPT-assisted tests, the final exam results (where neither group had ChatGPT support) showed only a 2.99% difference. This suggests that students who relied on ChatGPT during their studies struggled to retain concepts once AI assistance was removed. These findings underscore the need for a balanced approach to integrating ChatGPT into education. While ChatGPT has significant potential as an instructional aid, careful implementation is crucial to prevent over-dependence and to foster deeper, independent learning.
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,260 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,116 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,493 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,438 citations