OpenAlex · Updated hourly · Last updated: 16.03.2026, 02:19

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Investigating How Prompt Language Choice Affects the Security of ChatGPT-Generated Code

2026 · 0 citations · Frontiers in Artificial Intelligence and Applications · Open Access
Open full text at publisher

0 citations · 5 authors · Year: 2026

Abstract

With the widespread use of generative large language models (such as ChatGPT) for code generation, the security of their output code has attracted increasing attention. However, the impact of prompt language on the security of generated code has not been fully studied. This paper compares and analyses the Python and Java code generated by ChatGPT under English and Chinese prompts, and systematically evaluates the security differences between them. Experiments show that code generated from English prompts contains significantly fewer vulnerabilities than code generated from Chinese prompts, indicating that the choice of prompt language has an important impact on code security. This finding provides practical guidance for developers optimizing prompt engineering; in scenarios with high security requirements, English prompts are recommended.
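The comparison the abstract describes could be sketched as follows. This is a hypothetical illustration, not the authors' actual tooling: real evaluations would use a proper static analyzer (e.g. Bandit for Python) on code generated from parallel English and Chinese prompts, whereas here two stand-in code samples are scanned with a few illustrative regex patterns.

```python
import re

# Illustrative insecurity indicators; a real study would use a full
# static analyzer rather than regex matching.
INSECURE_PATTERNS = [
    r"\beval\(",        # arbitrary code execution
    r"\bexec\(",        # arbitrary code execution
    r"\bos\.system\(",  # shell command injection risk
]

def count_vulnerabilities(code: str) -> int:
    """Count occurrences of insecure patterns in a code sample."""
    return sum(len(re.findall(p, code)) for p in INSECURE_PATTERNS)

# Hypothetical outputs for the same coding task under two prompt languages.
code_from_english_prompt = "import subprocess\nsubprocess.run(['ls'], check=True)\n"
code_from_chinese_prompt = "import os\nos.system('ls ' + user_input)\n"

assert count_vulnerabilities(code_from_english_prompt) < count_vulnerabilities(code_from_chinese_prompt)
```

Comparing per-sample vulnerability counts across many task/prompt pairs, as the paper does at scale, is what supports the claim that English prompts yield fewer flagged issues.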

Topics

Artificial Intelligence in Healthcare and Education · Adversarial Robustness in Machine Learning · Advanced Malware Detection Techniques