Dies ist eine Übersichtsseite mit Metadaten zu dieser wissenschaftlichen Arbeit. Der vollständige Artikel ist beim Verlag verfügbar.
An Evaluation of the Safety of ChatGPT with Malicious Prompt Injection
13
Zitationen
2
Autoren
2024
Jahr
Abstract
<title>Abstract</title> Artificial intelligence systems, particularly those involving sophisticated neural network architectures like ChatGPT, have demonstrated remarkable capabilities in generating human-like text. However, the susceptibility of these systems to malicious prompt injections poses significant risks, necessitating comprehensive evaluations of their safety and robustness. The study presents a novel automated framework for systematically injecting and analyzing malicious prompts to assess the vulnerabilities of ChatGPT. Results indicate substantial rates of harmful responses across various scenarios, highlighting critical areas for improvement in model defenses. The findings underscore the importance of advanced adversarial training, real-time monitoring, and interdisciplinary collaboration to enhance the ethical deployment of AI systems. Recommendations for future research emphasize the need for robust safety mechanisms and transparent model operations to mitigate the risks associated with adversarial inputs.
Ähnliche Arbeiten
Rethinking the Inception Architecture for Computer Vision
2016 · 30.338 Zit.
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24.418 Zit.
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21.303 Zit.
CBAM: Convolutional Block Attention Module
2018 · 21.301 Zit.
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18.499 Zit.