This is an overview page with metadata for this scientific work. The full article is available from the publisher.
THE-X: Privacy-Preserving Transformer Inference with Homomorphic Encryption
73 Citations · 9 Authors · Year 2022
Abstract
As more and more pre-trained language models adopt on-cloud deployment, privacy issues grow quickly, mainly due to the exposure of plain-text user data (e.g., search history, medical records, bank accounts). Privacy-preserving inference of transformer models is in demand among cloud service users. To protect privacy, it is an attractive choice to compute only on ciphertext under homomorphic encryption (HE). However, enabling pre-trained model inference on ciphertext data is difficult due to the complex computations in transformer blocks, which are not yet supported by current HE tools. In this work, we introduce THE-X, an approximation approach for transformers, which enables privacy-preserving inference of pre-trained models developed with popular frameworks. THE-X proposes a workflow to deal with complex computation in transformer networks, including all the non-polynomial functions such as GELU, softmax, and LayerNorm. Experiments reveal that THE-X can enable transformer inference on encrypted data for different downstream tasks, all with a negligible performance drop while enjoying the theory-guaranteed privacy-preserving advantage.
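The abstract notes that HE schemes support only additions and multiplications, so non-polynomial functions like GELU must be replaced by polynomial approximations before encrypted inference. As a minimal illustrative sketch (not THE-X's actual approximation method; the interval `[-4, 4]` and polynomial degree are assumptions for illustration), one can fit a low-degree polynomial to GELU on a bounded input range:

```python
import numpy as np

def gelu(x):
    # Reference GELU (tanh approximation form)
    return 0.5 * x * (1.0 + np.tanh(np.sqrt(2.0 / np.pi) * (x + 0.044715 * x**3)))

# HE schemes evaluate only additions and multiplications, so a
# non-polynomial activation is replaced by a polynomial fitted on a
# bounded range where the layer's inputs are expected to fall.
xs = np.linspace(-4.0, 4.0, 2001)          # assumed input range
coeffs = np.polyfit(xs, gelu(xs), deg=4)   # degree trades accuracy vs. ciphertext depth
poly_gelu = np.poly1d(coeffs)

# Worst-case approximation error on the fitting range
max_err = np.max(np.abs(poly_gelu(xs) - gelu(xs)))
```

A higher degree reduces the error but deepens the multiplicative circuit the HE scheme must evaluate, which is the central accuracy/cost trade-off such approximation workflows manage.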
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,389 citations
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,864 citations
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,590 citations
Deep Learning with Differential Privacy
2016 · 5,571 citations
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,558 citations