This is an overview page with metadata for this scientific work. The full article is available from the publisher.
A Large-Scale Empirical Study of AI-Generated Code in Real-World Repositories
Citations: 0
Authors: 5
Year: 2026
Abstract
Large language models (LLMs) are increasingly used in software development, generating code that ranges from short snippets to substantial project components. As AI-generated code becomes more common in real-world repositories, it is important to understand how it differs from human-written code and how AI assistance may influence development practices. However, existing studies have largely relied on small-scale or controlled settings, leaving a limited understanding of AI-generated code in the wild. In this work, we present a large-scale empirical study of AI-generated code collected from real-world repositories. We examine both code-level properties, including complexity, structural characteristics, and defect-related indicators, and commit-level characteristics, such as commit size, activity patterns, and post-commit evolution. To support this study, we develop a detection pipeline that combines heuristic filtering with LLM-based classification to identify AI-generated code and construct a large-scale dataset for analysis. Our study provides a comprehensive view of the characteristics of AI-generated code in practice and highlights how AI-assisted development differs from conventional human-driven development. These findings contribute to a better understanding of the real-world impact of AI-assisted programming and offer an empirical basis for future research on AI-generated software.
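The abstract describes a two-stage detection pipeline: cheap heuristic filtering followed by LLM-based classification. The paper does not publish its implementation here, so the following is only a minimal sketch of that general architecture; the `Commit` type, the keyword list, and the injected `llm_classify` callable are all illustrative assumptions, not the authors' method.

```python
# Sketch (assumption, not the paper's code) of a two-stage detector:
# Stage 1 applies fast surface heuristics; Stage 2 sends survivors to
# an LLM-based classifier, injected here as a callable so the sketch
# stays self-contained and testable offline.

from dataclasses import dataclass
from typing import Callable, List


@dataclass
class Commit:
    sha: str
    message: str
    diff: str


# Hypothetical surface signals of AI assistance in commit text.
AI_HINTS = ("copilot", "chatgpt", "generated by ai", "gpt-4", "claude")


def heuristic_filter(commits: List[Commit]) -> List[Commit]:
    """Stage 1: keep only commits whose message or diff matches a hint."""
    def looks_ai(c: Commit) -> bool:
        text = (c.message + " " + c.diff).lower()
        return any(hint in text for hint in AI_HINTS)
    return [c for c in commits if looks_ai(c)]


def detect_ai_commits(
    commits: List[Commit],
    llm_classify: Callable[[Commit], bool],
) -> List[Commit]:
    """Stage 2: run the (expensive) LLM classifier only on Stage-1 survivors."""
    return [c for c in heuristic_filter(commits) if llm_classify(c)]
```

Running the pipeline with a stand-in classifier that accepts everything shows the heuristic stage pruning commits before the LLM stage is invoked:

```python
commits = [
    Commit("a1c3", "Add login form (generated by AI)", "+def login(): ..."),
    Commit("b2d4", "Fix typo in README", "-teh\n+the"),
]
kept = detect_ai_commits(commits, llm_classify=lambda c: True)
# Only the first commit passes the heuristic filter.
print([c.sha for c in kept])
```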