This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Privacy-Aware Knowledge Distillation Based on Dynamic Sample Selection
1
Citation
3
Authors
2023
Year
Abstract
Deep neural networks (DNNs) are usually complex, which makes them difficult to deploy on resource-constrained mobile devices. Even when a device can meet a model's resource requirements, the model cannot be applied directly: because it encodes private information, direct deployment poses a risk of privacy leakage. Differential privacy can protect the private information in the model, but too many queries weaken the degree of privacy protection the data receives. To alleviate these problems, this paper proposes a dynamic sample selection method that selects high-quality samples during model training; the set of selected samples shrinks dynamically as training progresses, so the number of queries can be reduced by reducing the number of samples. To further achieve model compression, the paper proposes privacy-aware knowledge distillation based on dynamic sample selection, which compresses the model through knowledge distillation and uses dynamic sample selection to reduce the number of samples, thereby reducing the number of queries and the privacy loss. Specifically, the student model trained in a self-learning stage is used to select high-quality samples, and only these samples are used for distillation learning, reducing the amount of data used in distillation. Since differential privacy is applied to the batch loss of distillation learning and the batch size is fixed, fewer samples mean fewer queries by the student during distillation, and thus stronger privacy protection for sensitive data. Experiments on the CIFAR-10 dataset show that a student model trained with the proposed method achieves a compression ratio of 65% and an accuracy of 78%.
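The query-reduction idea in the abstract can be sketched as follows. This is a minimal illustration, not the paper's implementation: the linear shrinking schedule, the use of the self-trained student's confidence as the quality score, and the Gaussian-mechanism parameters (`clip`, `noise_mult`) are all illustrative assumptions.

```python
import numpy as np

def select_high_quality(confidences, epoch, total_epochs, base_frac=1.0, min_frac=0.3):
    """Select indices of high-quality samples; the kept fraction shrinks
    linearly with training progress (illustrative schedule)."""
    frac = base_frac - (base_frac - min_frac) * epoch / max(total_epochs - 1, 1)
    k = max(1, int(len(confidences) * frac))
    # treat the student's confidence on a sample as its quality score
    return np.argsort(confidences)[::-1][:k]

def dp_batch_loss(per_sample_losses, clip=1.0, noise_mult=1.1, rng=None):
    """Bound the batch distillation loss and add Gaussian noise (Gaussian
    mechanism); each call corresponds to one privacy-consuming query."""
    rng = np.random.default_rng() if rng is None else rng
    loss = float(np.mean(per_sample_losses))
    loss = min(loss, clip)  # bound the sensitivity before adding noise
    return loss + rng.normal(0.0, noise_mult * clip)

# Toy run: fewer selected samples -> fewer batches per epoch -> fewer
# noisy queries over training, hence less privacy loss.
rng = np.random.default_rng(0)
conf = rng.random(1000)          # stand-in for student confidences
batch_size = 100
queries = 0
for epoch in range(5):
    idx = select_high_quality(conf, epoch, total_epochs=5)
    queries += len(idx) // batch_size  # one DP query per distillation batch
print(queries)
```

With a fixed batch size, the query count falls from 10 batches in the first epoch to 3 in the last, which is exactly the mechanism the abstract relies on: shrinking the selected sample set directly shrinks the number of differentially private loss queries.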
Related Works
k-ANONYMITY: A MODEL FOR PROTECTING PRIVACY
2002 · 8,395 cit.
Calibrating Noise to Sensitivity in Private Data Analysis
2006 · 6,872 cit.
Deep Learning with Differential Privacy
2016 · 5,595 cit.
Communication-Efficient Learning of Deep Networks from Decentralized Data
2016 · 5,591 cit.
Large-Scale Machine Learning with Stochastic Gradient Descent
2010 · 5,564 cit.