Inverting Gradients - How easy is it to break privacy in federated learning?

Authors: Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Figure 1: Reconstruction of an input image x from the gradient ∇θLθ(x, y). Left: Image from the validation dataset. Middle: Reconstruction from a trained ResNet-18 trained on ImageNet. Right: Reconstruction from a trained ResNet-152. In both cases, the intended privacy of the image is broken. Note that previous attacks cannot recover either ImageNet-sized data [35] or attack trained models.
Researcher Affiliation | Academia | Jonas Geiping, Hartmut Bauermeister, Hannah Dröge, Michael Moeller; Dep. of Electrical Engineering and Computer Science, University of Siegen; {jonas.geiping, hartmut.bauermeister, hannah.droege, michael.moeller}@uni-siegen.de
Pseudocode | No | The paper describes methods in text but does not include any structured pseudocode or algorithm blocks.
Open Source Code | Yes | We provide a pytorch implementation at https://github.com/JonasGeiping/invertinggradients.
Open Datasets | Yes | We measure the mean PSNR of the reconstruction of 32 × 32 CIFAR-10 images over the first 100 images from the validation set using the same shallow and smooth CNN as in [35], which we denote as 'LeNet (Zhu)' as well as a ResNet architecture, both with trained and untrained parameters.
Dataset Splits | Yes | We measure the mean PSNR of the reconstruction of 32 × 32 CIFAR-10 images over the first 100 images from the validation set using the same shallow and smooth CNN as in [35], which we denote as 'LeNet (Zhu)' as well as a ResNet architecture, both with trained and untrained parameters.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions providing a 'pytorch implementation' but does not specify the version number of PyTorch or any other software dependencies.
Experiment Setup | Yes | This attack is, due to the double backpropagation, roughly twice as expensive as a single minibatch step per gradient step on the objective eq. (4). In this work, we conservatively run the attack for up to 24000 iterations, with a relatively small step size... We allow a generous setting of 16 restarts of the L-BFGS solver. ... Even for a high number of 100 local gradient descent steps the reconstruction quality is unimpeded. The only failure case we were able to exemplify was induced by picking a high learning rate of 1e-1. This setup, however, corresponds to a step size that would lead to a divergent training update, and as such does not provide useful model updates.
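
The experiment-setup row above refers to double backpropagation on the paper's objective eq. (4), which matches the gradient induced by a candidate image against the gradient shared by a federated client. The sketch below illustrates that general gradient-matching idea in PyTorch (the framework of the released implementation); it is a minimal illustration under assumptions, not the authors' code: the cosine-similarity loss and total-variation prior follow the paper's description, but the optimizer (plain Adam), the hyperparameters, and the function names (cosine_grad_loss, reconstruct, total_variation) are illustrative choices.

```python
import torch
import torch.nn.functional as F

def total_variation(x):
    # Anisotropic total variation over the spatial dimensions of an image batch.
    return (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean() + \
           (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()

def cosine_grad_loss(model, x, y, target_grads):
    # Gradient of the training loss w.r.t. the model parameters, computed with
    # create_graph=True so the attack can backpropagate through it
    # (the "double backpropagation" mentioned in the quoted setup).
    loss = F.cross_entropy(model(x), y)
    grads = torch.autograd.grad(loss, model.parameters(), create_graph=True)
    # 1 - cosine similarity between candidate and target gradients.
    dot = sum((g * t).sum() for g, t in zip(grads, target_grads))
    g_norm = sum(g.pow(2).sum() for g in grads).sqrt()
    t_norm = sum(t.pow(2).sum() for t in target_grads).sqrt()
    return 1.0 - dot / (g_norm * t_norm)

def reconstruct(model, target_grads, y, shape=(1, 3, 32, 32),
                iters=2000, lr=0.1, tv_weight=1e-2):
    # Optimize a randomly initialized image so that its gradient matches the
    # gradient observed from the client; a total-variation prior keeps it smooth.
    x = torch.randn(shape, requires_grad=True)
    opt = torch.optim.Adam([x], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        rec_loss = cosine_grad_loss(model, x, y, target_grads) \
                   + tv_weight * total_variation(x)
        rec_loss.backward()
        opt.step()
    return x.detach()
```

Given a model, a known label y, and the per-parameter gradients a client would send for a single CIFAR-10 image, reconstruct(model, target_grads, y) returns the recovered image tensor.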
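The quoted setup also allows up to 16 restarts (there for the L-BFGS baseline solver). A hypothetical wrapper around the sketch above, which simply keeps the lowest-loss reconstruction over several random restarts, could look as follows; n_restarts and the selection criterion are assumptions, not the paper's exact procedure.

```python
def reconstruct_with_restarts(model, target_grads, y, n_restarts=16, **kwargs):
    # Re-run the reconstruction from several random initializations and keep
    # the candidate whose gradient matches the target best (lowest cosine loss).
    best_x, best_loss = None, float("inf")
    for _ in range(n_restarts):
        x = reconstruct(model, target_grads, y, **kwargs)
        loss = cosine_grad_loss(model, x, y, target_grads).item()
        if loss < best_loss:
            best_x, best_loss = x, loss
    return best_x
```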