OpenAlex · Aktualisierung stündlich · Letzte Aktualisierung: 17.05.2026, 19:09

Dies ist eine Übersichtsseite mit Metadaten zu dieser wissenschaftlichen Arbeit. Der vollständige Artikel ist beim Verlag verfügbar.

Ignore Previous Prompt: Attack Techniques For Language Models

2022·77 Zitationen·arXiv (Cornell University)Open Access
Volltext beim Verlag öffnen

77

Zitationen

2

Autoren

2022

Jahr

Abstract

Transformer-based large language models (LLMs) provide a powerful foundation for natural language tasks in large-scale customer-facing applications. However, studies that explore their vulnerabilities emerging from malicious user interaction are scarce. By proposing PromptInject, a prosaic alignment framework for mask-based iterative adversarial prompt composition, we examine how GPT-3, the most widely deployed language model in production, can be easily misaligned by simple handcrafted inputs. In particular, we investigate two types of attacks -- goal hijacking and prompt leaking -- and demonstrate that even low-aptitude, but sufficiently ill-intentioned agents, can easily exploit GPT-3's stochastic nature, creating long-tail risks. The code for PromptInject is available at https://github.com/agencyenterprise/PromptInject.

Ähnliche Arbeiten

Autoren

Themen

Adversarial Robustness in Machine LearningTopic ModelingArtificial Intelligence in Healthcare and Education
Volltext beim Verlag öffnen