This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Investigating How Prompt Language Choice Affects the Security of ChatGPT-Generated Code
Citations: 0 · Authors: 5 · Year: 2026
Abstract
With the widespread use of generative large language models such as ChatGPT for code generation, the security of their output code has attracted increasing attention. However, the impact of prompt language on the security of generated code has not been fully studied. This paper compares Python and Java code generated by ChatGPT under English and Chinese prompts and systematically evaluates the resulting security differences. Experiments show that code generated from English prompts contains significantly fewer vulnerabilities than code generated from Chinese prompts, indicating that the choice of prompt language has an important impact on code security. This finding offers practical guidance for developers optimizing their prompt engineering: English prompts should be preferred in scenarios with high security requirements.
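The abstract does not disclose which tooling the authors used to count vulnerabilities in the generated code. As a purely illustrative sketch of the kind of comparison the paper describes, the toy checker below uses Python's standard `ast` module to count calls to known-risky builtins (`eval`, `exec`) in a code snippet; the function name, the risk list, and the example snippets are assumptions for illustration, not the paper's actual methodology.

```python
import ast

# Hypothetical list of risky builtins; a real evaluation would use a
# full static analyzer and a vulnerability taxonomy such as CWE.
RISKY_CALLS = {"eval", "exec"}

def count_risky_calls(source: str) -> int:
    """Count calls to known-risky builtins in a Python snippet."""
    tree = ast.parse(source)
    count = 0
    for node in ast.walk(tree):
        # A direct call like eval(...) appears as Call(func=Name("eval"))
        if isinstance(node, ast.Call) and isinstance(node.func, ast.Name):
            if node.func.id in RISKY_CALLS:
                count += 1
    return count

# Two hypothetical model outputs for the same task, one unsafe, one safer
snippet_unsafe = "result = eval(user_input)"   # arbitrary code execution
snippet_safer = "result = int(user_input)"     # strict parsing instead

print(count_risky_calls(snippet_unsafe))  # 1
print(count_risky_calls(snippet_safer))   # 0
```

Aggregating such counts over many generated samples per prompt language would yield the kind of per-language vulnerability comparison the abstract reports.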
Similar Works
Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI
2019 · 8,245 citations
Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead
2019 · 8,100 citations
High-performance medicine: the convergence of human and artificial intelligence
2018 · 7,466 citations
Proceedings of the 19th International Joint Conference on Artificial Intelligence
2005 · 5,776 citations
Peeking Inside the Black-Box: A Survey on Explainable Artificial Intelligence (XAI)
2018 · 5,429 citations