OpenAlex · Updated hourly · Last updated: 22.03.2026, 11:27

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Complexity Analysis of LLM-Generated Recursive Code: A Systematic Evaluation

2025 · 0 citations · VFAST Transactions on Software Engineering · Open Access
Open full text at the publisher

Citations: 0 · Authors: 5 · Year: 2025

Abstract

Programming is an essential skill, but it can be difficult for beginners, especially when it comes to logical concepts like recursion. Despite the development of many computational and pedagogical methods to simplify programming, recursion remains a challenging topic to understand, implement, and debug. Advances in artificial intelligence have led to large language models (LLMs), such as ChatGPT, Gemini, and DeepSeek, that can generate programming source code. Various studies have analyzed the quality of code produced by LLMs; however, the complexity of the recursive code generated by these models has not been studied. To fill this gap, this study compared and analyzed recursive Python programs generated by Gemini (2.5 Pro), DeepSeek (V3.1), and ChatGPT (GPT-5). For the study, 250 programs generated by each model were examined using Halstead and cyclomatic complexity metrics. The results showed that ChatGPT produced less complex code, indicating easier recursion, while DeepSeek produced more complex programs, with higher Halstead and cyclomatic complexity scores. Gemini's programs showed a medium level of difficulty. The Kruskal-Wallis test was used to further analyze the data, and it revealed significant differences between the recursive code generated by ChatGPT, DeepSeek, and Gemini. Overall, the study found that each LLM has a distinct pattern: ChatGPT emphasizes simplicity, Gemini takes a balanced approach, and DeepSeek's generated code promotes clarity but suffers from complexity. More comprehensive analysis will be conducted in the future by expanding the dataset and including larger language models.
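To illustrate the kind of measurement the abstract describes, the sketch below computes crude stand-ins for the two metrics on a toy recursive function. This is not the paper's tooling: cyclomatic complexity is approximated as 1 + the number of decision points found in the AST, and Halstead volume is approximated from token counts (where, simplistically, `NAME`/`NUMBER`/`STRING` tokens are treated as operands, including keywords, and `OP` tokens as operators). The `factorial` example is an illustrative input, not taken from the study's dataset.

```python
import ast
import io
import math
import tokenize

# AST node types treated as decision points for the cyclomatic approximation.
DECISION_NODES = (ast.If, ast.IfExp, ast.For, ast.While, ast.ExceptHandler)

def cyclomatic_complexity(source: str) -> int:
    """Approximate McCabe complexity: 1 + decision points."""
    count = 1
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, DECISION_NODES):
            count += 1
        elif isinstance(node, ast.BoolOp):
            # each extra operand of an `and`/`or` chain adds a branch
            count += len(node.values) - 1
    return count

def halstead_volume(source: str) -> float:
    """Crude Halstead volume: V = N * log2(n), from token counts."""
    operators, operands = [], []
    for tok in tokenize.generate_tokens(io.StringIO(source).readline):
        if tok.type == tokenize.OP:
            operators.append(tok.string)
        elif tok.type in (tokenize.NAME, tokenize.NUMBER, tokenize.STRING):
            operands.append(tok.string)
    n = len(set(operators)) + len(set(operands))  # distinct symbols
    N = len(operators) + len(operands)            # total occurrences
    return N * math.log2(n) if n else 0.0

# A recursive function of the kind an LLM might generate.
factorial_src = """
def factorial(n):
    if n <= 1:
        return 1
    return n * factorial(n - 1)
"""

print(cyclomatic_complexity(factorial_src))  # one `if` -> 2
print(round(halstead_volume(factorial_src), 1))
```

In practice a tool such as `radon` would be used for real Halstead and cyclomatic scores; the point here is only that both metrics are static, text-level measures, so 250 generated programs per model can be scored without executing any of them.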

Related works