This is an overview page with metadata for this scientific paper. The full article is available from the publisher.
Understanding Training-Data Leakage from Gradients in Neural Networks for Image Classification
Citations: 6
Authors: 2
Year: 2021
Abstract
Federated learning of deep learning models for supervised tasks, e.g. image classification and segmentation, has found many applications: for example in human-in-the-loop tasks such as film post-production where it enables sharing of domain expertise of human artists in an efficient and effective fashion. In many such applications, we need to protect the training data from being leaked when gradients are shared in the training process due to IP or privacy concerns. Recent works have demonstrated that it is possible to reconstruct the training data from gradients for an image-classification model when its architecture is known. However, there is still an incomplete theoretical understanding of the efficacy and failure of such attacks. In this paper, we analyse the source of training-data leakage from gradients. We formulate the problem of training data reconstruction as solving an optimisation problem iteratively for each layer. The layer-wise objective function is primarily defined by weights and gradients from the current layer as well as the output from the reconstruction of the subsequent layer, but it might also involve a 'pull-back' constraint from the preceding layer. Training data can be reconstructed when we solve the problem backward from the output of the network through each layer. Based on this formulation, we are able to attribute the potential leakage of the training data in a deep network to its architecture. We also propose a metric to measure the level of security of a deep learning model against gradient-based attacks on the training data.
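For background, the abstract's premise is the class of gradient-matching reconstruction attacks it cites (the paper itself develops a layer-wise formulation; see the full article for that). Below is a minimal illustrative sketch of the general gradient-matching idea, not the authors' layer-wise procedure: the toy architecture, the soft-label trick, and all hyperparameters here are assumptions chosen only to make the example self-contained.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy setup (an assumption for illustration): a small classifier whose
# architecture is known to the attacker, matching the paper's threat model.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(28 * 28, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

def soft_ce(logits, y_soft):
    # Cross-entropy against a soft label distribution.
    return -(y_soft * torch.log_softmax(logits, dim=-1)).sum()

# Victim side: gradients of the loss on a private example are shared,
# e.g. during a federated-learning round.
x_true = torch.rand(1, 1, 28, 28)
y_true = torch.zeros(1, 10)
y_true[0, 3] = 1.0
shared_grads = torch.autograd.grad(soft_ce(model(x_true), y_true),
                                   model.parameters())

# Attacker side: optimise a dummy input and soft label so that their
# gradients match the shared ones (the gradient-matching objective).
x_hat = torch.rand_like(x_true, requires_grad=True)
y_hat = torch.randn(1, 10, requires_grad=True)
opt = torch.optim.Adam([x_hat, y_hat], lr=0.1)

for step in range(300):
    opt.zero_grad()
    grads = torch.autograd.grad(soft_ce(model(x_hat), y_hat.softmax(dim=-1)),
                                model.parameters(), create_graph=True)
    # Squared distance between dummy gradients and shared gradients.
    match = sum(((g - s) ** 2).sum() for g, s in zip(grads, shared_grads))
    match.backward()
    opt.step()

print("final gradient-matching loss:", float(match))
```

When the match loss is driven close to zero, `x_hat` tends toward the private input; the paper's layer-wise analysis explains, per architecture, when this recovery succeeds or fails.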
Related Work
Rethinking the Inception Architecture for Computer Vision
2016 · 30,338 citations
MobileNetV2: Inverted Residuals and Linear Bottlenecks
2018 · 24,418 citations
An Image is Worth 16x16 Words: Transformers for Image Recognition at Scale
2020 · 21,303 citations
CBAM: Convolutional Block Attention Module
2018 · 21,301 citations
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification
2015 · 18,499 citations