This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
How Secure is Code Generated by ChatGPT?
9
Citations
4
Authors
2023
Year
Abstract
In recent years, large language models have been responsible for great advances in the field of artificial intelligence (AI). ChatGPT in particular, an AI chatbot developed and recently released by OpenAI, has taken the field to the next level. The conversational model is able not only to process human-like text, but also to translate natural language into code. However, the security of programs generated by ChatGPT should not be overlooked. In this paper, we perform an experiment to address this issue. Specifically, we ask ChatGPT to generate a number of programs and evaluate the security of the resulting source code. We further investigate whether ChatGPT can be prodded to improve the security by appropriate prompts, and discuss the ethical aspects of using AI to generate code. Results suggest that ChatGPT is aware of potential vulnerabilities, but nonetheless often generates source code that is not robust to certain attacks.
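To make the abstract's claim concrete, the sketch below contrasts a classic vulnerability pattern that code generators often emit with its robust counterpart. This is an illustrative assumption on my part, not an example taken from the paper: SQL injection via string concatenation is one plausible instance of "code that is not robust to certain attacks", and the function names and schema are hypothetical.

```python
import sqlite3

def find_user_unsafe(conn, username):
    # Vulnerable: user input is concatenated into the SQL string,
    # so a crafted value can alter the query's logic (SQL injection).
    query = "SELECT id FROM users WHERE name = '" + username + "'"
    return conn.execute(query).fetchall()

def find_user_safe(conn, username):
    # Robust: the input is bound as a parameter, never interpreted as SQL.
    return conn.execute(
        "SELECT id FROM users WHERE name = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, name TEXT)")
conn.executemany("INSERT INTO users VALUES (?, ?)", [(1, "alice"), (2, "bob")])

payload = "x' OR '1'='1"  # malicious input
print(len(find_user_unsafe(conn, payload)))  # leaks every row: 2
print(len(find_user_safe(conn, payload)))    # matches nothing: 0
```

The unsafe variant returns the entire table for the crafted input, while the parameterized variant correctly returns no rows, which is the kind of difference a security evaluation of generated code would look for.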
Related Works
A detailed analysis of the KDD CUP 99 data set
2009 · 4,645 cit.
The Sybil Attack
2002 · 4,361 cit.
Practical Black-Box Attacks against Machine Learning
2017 · 3,423 cit.
UNSW-NB15: a comprehensive data set for network intrusion detection systems (UNSW-NB15 network data set)
2015 · 3,423 cit.
An Intrusion-Detection Model
1987 · 3,324 cit.