Dies ist eine Übersichtsseite mit Metadaten zu dieser wissenschaftlichen Arbeit. Der vollständige Artikel ist beim Verlag verfügbar.
Ignore Previous Prompt: Attack Techniques For Language Models
77
Zitationen
2
Autoren
2022
Jahr
Abstract
Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PromptInject, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks -- goal hijacking and prompt leaking -- and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3's stochastic nature, creating long-tail risks. The code for PromptInject is available at https://github.com/agencyenterprise/PromptInject.
Ähnliche Arbeiten
Rethinking the Inception Architecture for Computer Vision
2016 · 30.718 Zit.
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 25.033 Zit.
CBAM: Convolutional Block Attention Module
2018 · 21.862 Zit.
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21.511 Zit.
Xception: Deep Learning with Depthwise Separable Convolutions
2017 · 18.732 Zit.