This is an overview page with metadata for this scholarly article. The full article is available from the publisher.
Is AI Code Generation Undermining Developers’ Problem‑Solving Skills?
Citations: 0
Authors: 2
Year: 2026
Abstract
The rise of AI tools such as GitHub Copilot and ChatGPT has reshaped software development by providing substantial support for coding and debugging tasks. Although these tools enhance productivity and reduce routine workload, existing research has largely emphasized short-term efficiency gains, leaving their long-term cognitive and pedagogical effects insufficiently explored. This study investigates the cognitive trade-offs associated with sustained reliance on generative AI, with particular attention to students and junior developers. Recent empirical findings indicate that excessive dependence on AI assistance may weaken deep debugging skills, impede conceptual understanding, and challenge established educational practices in software engineering. To address these concerns, we synthesize empirical studies published since 2020 and draw on contemporary pedagogical theories to propose a structured framework for balanced AI integration. The proposed hybrid model shifts emphasis from full automation to a learning-oriented process that foregrounds exploration, human reasoning, and critical evaluation. It comprises three iterative phases, Detect (AI-assisted exploration), Engage (manual problem-solving and algorithmic reasoning), and Verify (AI-supported refinement), designed to preserve core cognitive competencies while leveraging automation effectively. The study underscores the importance of aligning AI tool usage with pedagogical objectives, ensuring that system design promotes understanding rather than output generation alone. These findings have implications for curriculum design in computer science education and for industrial strategies aimed at sustaining developer expertise in increasingly automated environments.