OpenAlex · Updated hourly · Last updated: 16.03.2026, 15:09

This is an overview page with metadata for this scholarly work. The full article is available from the publisher.

Privacy-Aware Knowledge Distillation Based on Dynamic Sample Selection

2023 · 1 citation
Open full text at publisher

Citations: 1
Authors: 3
Year: 2023

Abstract

Deep neural networks (DNNs) used in deep learning are usually complex, which makes them difficult to deploy on resource-constrained mobile devices. Even when a mobile device can meet a model's resource requirements, the model cannot be deployed directly: it encodes private information, so direct deployment risks privacy leakage. Differential privacy can protect the private information in the model, but a large number of queries weakens the strength of the privacy guarantee. To alleviate these problems, this paper proposes a dynamic sample selection method that selects high-quality samples during model training; the set of selected samples shrinks dynamically as training progresses, so the number of queries can be reduced by reducing the number of samples. To further achieve model compression, this paper proposes privacy-aware knowledge distillation based on dynamic sample selection, which compresses the model through knowledge distillation and uses dynamic sample selection to reduce the number of samples, thereby reducing the number of queries and the privacy loss. Specifically, the student model trained in a self-learning stage is used to select high-quality samples, and only those samples are used for distillation, reducing the number of samples in distillation learning. Since differential privacy protection is applied to the batch loss of distillation learning and the batch size is fixed, a smaller number of samples means fewer queries for the student during distillation, providing stronger privacy protection for sensitive data. Experiments on the CIFAR-10 dataset show that a student model trained with the proposed method achieves a compression ratio of 65% and an accuracy of 78%.
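The two mechanisms the abstract describes can be sketched in a few lines: a selection rule whose confidence threshold rises over training (so the selected set shrinks), and a batch distillation loss that is clipped and perturbed with Gaussian noise before release. This is a minimal illustration, not the paper's exact algorithm; the function names, the linear threshold schedule, and the noise scaling are assumptions made for the sketch.

```python
import random

def select_high_quality(confidences, epoch, total_epochs, base_threshold=0.5):
    """Keep only samples whose student confidence clears a threshold
    that rises linearly with training progress, so the selected set
    shrinks (and with it the number of privacy-consuming queries)."""
    threshold = base_threshold + (1.0 - base_threshold) * (epoch / total_epochs)
    return [i for i, c in enumerate(confidences) if c >= threshold]

def dp_batch_distill_loss(per_sample_losses, clip=1.0, sigma=0.5, rng=None):
    """Clip each per-sample distillation loss to bound its sensitivity,
    average over the batch, and add Gaussian noise scaled to the clip
    bound (a DP-style release of the batch loss)."""
    rng = rng or random.Random()
    clipped = [min(loss, clip) for loss in per_sample_losses]
    mean = sum(clipped) / len(clipped)
    return mean + rng.gauss(0.0, sigma * clip / len(clipped))
```

For example, with student confidences `[0.4, 0.6, 0.8, 0.95]` and `base_threshold=0.5`, three samples pass at epoch 0 but only one at epoch 9 of 10 — fewer samples per epoch means fewer noisy-loss queries, which is the mechanism the paper uses to spend less privacy budget.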

Related works

Authors

Institutions

Topics

Privacy-Preserving Technologies in Data · Advanced Neural Network Applications · Artificial Intelligence in Healthcare and Education