OpenAlex · Updated hourly · Last updated: 15.04.2026, 20:35

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Lost at C: A User Study on the Security Implications of Large Language Model Code Assistants

2022 · 39 citations · arXiv (Cornell University) · Open Access
Open full text at publisher

Citations: 39
Authors: 6
Year: 2022

Abstract

Large Language Models (LLMs) such as OpenAI Codex are increasingly being used as AI-based coding assistants. Understanding the impact of these tools on developers' code is paramount, especially as recent work showed that LLMs may suggest cybersecurity vulnerabilities. We conduct a security-driven user study (N=58) to assess code written by student programmers when assisted by LLMs. Given the potential severity of low-level bugs as well as their relative frequency in real-world projects, we tasked participants with implementing a singly-linked 'shopping list' structure in C. Our results indicate that the security impact in this setting (low-level C with pointer and array manipulations) is small: AI-assisted users produce critical security bugs at a rate no greater than 10% more than the control, indicating the use of LLMs does not introduce new security risks.
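For context, the study's task (a singly-linked "shopping list" in C) exercises exactly the pointer and buffer handling where critical low-level bugs tend to arise. A minimal sketch of such a structure might look like the following; the names and field sizes are illustrative, not the study's actual scaffold:

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

/* Illustrative singly-linked "shopping list" node holding an item name. */
typedef struct item {
    char name[32];
    struct item *next;
} item;

/* Prepend a new item; returns the new head, or NULL on allocation failure. */
item *add_item(item *head, const char *name) {
    item *n = malloc(sizeof *n);
    if (!n) return NULL;
    /* Bounded copy plus explicit terminator avoids the classic
       strcpy-style buffer overflow such studies probe for. */
    strncpy(n->name, name, sizeof n->name - 1);
    n->name[sizeof n->name - 1] = '\0';
    n->next = head;
    return n;
}

/* Count the nodes in the list. */
size_t list_len(const item *head) {
    size_t len = 0;
    for (; head; head = head->next)
        len++;
    return len;
}

/* Free every node, saving the next pointer before each free
   to avoid a use-after-free. */
void free_list(item *head) {
    while (head) {
        item *next = head->next;
        free(head);
        head = next;
    }
}
```

Even this small sketch shows several places a suggestion can go wrong: an unchecked `malloc`, an unbounded string copy, or reading `head->next` after `free(head)`.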

Topics

Software Engineering Research · Artificial Intelligence in Healthcare and Education