This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Exploiting Novel GPT-4 APIs
Citations: 3
Authors: 5
Year: 2023
Abstract
Language model attacks typically assume one of two extreme threat models: full white-box access to model weights, or black-box access limited to a text generation API. However, real-world APIs are often more flexible than just text generation: these APIs expose "gray-box" access, creating new threat vectors. To explore this, we red-team three new functionalities exposed in the GPT-4 APIs: fine-tuning, function calling, and knowledge retrieval. We find that fine-tuning a model on as few as 15 harmful examples or 100 benign examples can remove core safeguards from GPT-4, enabling a range of harmful outputs. Furthermore, we find that GPT-4 Assistants readily divulge the function call schema and can be made to execute arbitrary function calls. Finally, we find that knowledge retrieval can be hijacked by injecting instructions into retrieval documents. These vulnerabilities highlight that any additions to the functionality exposed by an API can create new vulnerabilities.
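To make the gray-box fine-tuning vector described in the abstract concrete, below is a minimal sketch, assuming the official OpenAI Python client, of how a fine-tuning job is submitted through the API. The local file name, its contents, and the exact model identifier are illustrative assumptions and are not taken from the paper.

```python
# Minimal sketch of exercising the fine-tuning API: upload a small JSONL
# dataset, then start a fine-tuning job. File name and model identifier
# are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Upload a small training set; the abstract reports that as few as 15 harmful
# (or ~100 benign) examples were enough to weaken GPT-4's safeguards.
training_file = client.files.create(
    file=open("finetune_examples.jsonl", "rb"),  # hypothetical local file
    purpose="fine-tune",
)

# Launch the fine-tuning job against a GPT-4-class base model (identifier assumed).
job = client.fine_tuning.jobs.create(
    training_file=training_file.id,
    model="gpt-4-0613",
)
print(job.id, job.status)
```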
Related Works
Rethinking the Inception Architecture for Computer Vision
2016 · 30,327 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,399 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,297 citations
CBAM: Convolutional Block Attention Module
2018 · 21,274 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,492 citations