This is an overview page with metadata for this scientific article. The full article is available from the publisher.
Comparative Evaluation of ChatGPT and DeepSeek for Competitive Programming: International Collegiate Programming Contest Case
Citations: 0
Authors: 2
Year: 2026
Abstract
The International Collegiate Programming Contest (ICPC), organized by the Association for Computing Machinery (ACM), is widely regarded as one of the most prestigious algorithmic programming competitions for university students. Given the challenges faced by students from developing countries in preparing for the contest, it is important to examine how generative AI tools can support their learning and preparation. This study evaluates the effectiveness of two leading generative AI models, ChatGPT and DeepSeek, in solving complex programming problems drawn from the ACM ICPC, comparing the models in terms of readability, error handling, computation speed, code accuracy, and educational value. In a two-trial experimental setup, both models are evaluated on 145 ICPC problems spanning data structures, algorithms, mathematics, geometry, advanced optimization, and related topics. Prompts were standardized, and evaluation was conducted over two iterations to simulate iterative learning. The results indicate that both DeepSeek and ChatGPT improved their performance over time. DeepSeek consistently outperformed ChatGPT in code accuracy (88.28% vs. 84.14%), generated more linear-time algorithms (41 vs. 19), and had a lower logical error rate (7.58% vs. 15.86%). The two models scored almost identically on code quality (37.79 vs. 37.85). Approximately 46.90% of the solutions generated by DeepSeek were fully insightful, surpassing ChatGPT’s 42.07%. However, ChatGPT improved markedly across trials, reducing its syntax error rate from 4.83% to 0.69%. Overall, DeepSeek outperforms ChatGPT in high-stakes programming scenarios, making it the more suitable choice, and these results offer actionable guidance for incorporating generative AI tools into advanced programming education.
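The percentage metrics quoted in the abstract (accuracy, syntax error rate, logical error rate) are simple proportions over the 145-problem benchmark. As a hedged illustration only, the sketch below shows how such per-model tallies could be computed; the `Attempt` record format and the `summarize` helper are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass

@dataclass
class Attempt:
    """One model's attempt at one ICPC problem (hypothetical record format)."""
    correct: bool
    syntax_error: bool
    logic_error: bool

def summarize(attempts: list[Attempt]) -> dict[str, float]:
    """Compute percentage metrics over a list of attempts."""
    n = len(attempts)
    return {
        "accuracy_pct": 100 * sum(a.correct for a in attempts) / n,
        "syntax_err_pct": 100 * sum(a.syntax_error for a in attempts) / n,
        "logic_err_pct": 100 * sum(a.logic_error for a in attempts) / n,
    }

# Toy data: 128 of 145 problems solved correctly, 17 with a logic error.
toy = [Attempt(True, False, False)] * 128 + [Attempt(False, False, True)] * 17
print(round(summarize(toy)["accuracy_pct"], 2))  # ~88.28
```

With 128 correct solutions out of 145 problems, the accuracy works out to roughly the 88.28% DeepSeek figure reported in the abstract, which is why those numbers were chosen for the toy data.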
Related Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,200 cit.
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,051 cit.
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,416 cit.
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 cit.
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,410 cit.