This is an overview page with metadata for this scientific work. The full article is available from the publisher.
Complexity Analysis of LLM-Generated Recursive Code: A Systematic Evaluation
Citations: 0
Authors: 5
Year: 2025
Abstract
Programming is an essential skill, but it can be difficult for beginners, especially when it comes to logical concepts like recursion. Despite the development of many computational and pedagogical methods to simplify programming, recursion remains a challenging topic to understand, implement, and debug. Advances in artificial intelligence have led to large language models (LLMs), such as ChatGPT, Gemini, and DeepSeek, that can generate programming source code. Various studies have analyzed the quality of code produced by LLMs; however, the complexity of the recursive code generated by these models has not been studied. To fill this gap, this study compared and analyzed recursive Python programs generated by Gemini (2.5 Pro), DeepSeek (V3.1), and ChatGPT (GPT-5). For the study, 250 programs generated by each model were examined using Halstead and cyclomatic complexity metrics. The results showed that ChatGPT produced the least complex code, indicating easier recursion, while DeepSeek produced the most complex programs, with higher Halstead and cyclomatic complexity scores. Gemini's programs showed a medium level of difficulty. The Kruskal-Wallis test was used to further analyze the data, and it revealed significant differences between the recursive code generated by ChatGPT, DeepSeek, and Gemini. Overall, the study found that each LLM has a distinct pattern: ChatGPT emphasizes simplicity, Gemini takes a balanced approach, and DeepSeek's generated code promotes clarity but suffers from complexity. A more comprehensive analysis will be conducted in the future by expanding the dataset and including additional large language models.
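To illustrate the kind of measurement the abstract describes, here is a minimal sketch of McCabe-style cyclomatic complexity (one of the two metrics used in the study) computed over Python source with the standard-library `ast` module. The function name, the chosen set of decision nodes, and the sample program are illustrative assumptions, not the paper's actual tooling or dataset.

```python
import ast

def cyclomatic_complexity(source: str) -> int:
    """McCabe-style cyclomatic complexity: 1 + number of decision points."""
    tree = ast.parse(source)
    complexity = 1
    for node in ast.walk(tree):
        # Each branching construct adds one independent path.
        if isinstance(node, (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)):
            complexity += 1
        elif isinstance(node, ast.BoolOp):
            # `a and b and c` adds two extra short-circuit branches.
            complexity += len(node.values) - 1
    return complexity

# A typical LLM-generated recursive function: one `if` guard -> complexity 2.
factorial_src = (
    "def factorial(n):\n"
    "    if n <= 1:\n"
    "        return 1\n"
    "    return n * factorial(n - 1)\n"
)
print(cyclomatic_complexity(factorial_src))  # -> 2
```

In practice, a study like this would more likely use an established tool such as `radon` (which reports both cyclomatic and Halstead metrics for Python code), and the group comparison described in the abstract can be run with `scipy.stats.kruskal` on the per-model complexity scores.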
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,287 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,140 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,534 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,450 citations