This is an overview page with metadata for this scholarly work. The full article is available from the publisher.
Multi-step Jailbreaking Privacy Attacks on ChatGPT
140 citations · 7 authors · 2023
Abstract
With the rapid progress of large language models (LLMs), many downstream NLP tasks can be solved well given appropriate prompts. Though model developers and researchers work hard on dialog safety to prevent LLMs from generating harmful content, it remains challenging to steer AI-generated content (AIGC) toward the human good. As powerful LLMs consume existing text data from various domains (e.g., GPT-3 was trained on 45 TB of text), it is natural to ask whether private information is included in the training data and what privacy threats these LLMs and their downstream applications can bring. In this paper, we study the privacy threats from OpenAI's ChatGPT and the New Bing enhanced by ChatGPT, and show that application-integrated LLMs may cause new privacy threats. To this end, we conduct extensive experiments to support our claims and discuss LLMs' privacy implications. © 2023 Association for Computational Linguistics.
Related Works
k-Anonymity: A Model for Protecting Privacy
2002 · 8,390 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,866 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,590 citations
Deep Learning with Differential Privacy
2016 · 5,572 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,558 citations