R-GAP: Recursive Gradient Attack on Privacy

Authors: Junyi Zhu, Matthew B. Blaschko

ICLR 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results demonstrate that R-GAP works as well as or even better than optimization-based approaches at a fraction of the computation under certain conditions. Additionally, we propose a Rank Analysis method, which can be used to estimate the risk of gradient attacks inherent in certain network architectures, regardless of whether an optimization-based or closed-form recursive attack is used. Experimental results demonstrate the utility of the rank analysis towards improving the network's security.
Researcher Affiliation | Academia | Junyi Zhu and Matthew Blaschko, Dept. ESAT, Center for Processing Speech and Images, KU Leuven, Belgium. {junyi.zhu,matthew.blaschko}@esat.kuleuven.be
Pseudocode | Yes | Algorithm 1: R-GAP (notation is consistent with Equation 6 to Equation 15). Data: i: i-th layer; Wi: weights; ∇Wi: gradients. Result: x1. for i = d to 1 do... (a minimal illustrative sketch of this recursion is given after the table)
Open Source Code | Yes | Source code is available for download from https://github.com/JunyiZhu-AI/R-GAP.
Open Datasets | Yes | Table 1: Comparison of the performance of R-GAP, DLG and H-GAP. MSE has been used to measure the quality of the reconstruction. *: CIFAR10, **: MNIST
Dataset Splits | Yes | Training 200 epochs on CIFAR10 and saving the model with the best performance on the validation set, three networks achieve a close accuracy.
Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments.
Software Dependencies | No | The paper mentions using specific optimizers (e.g., L-BFGS, Adam) and implicitly uses common machine learning frameworks, but it does not specify any software dependencies with version numbers.
Experiment Setup | Yes | We have randomly initialized the network, as DLG is prone to fail if the network is at a late stage of training (Geiping et al., 2020). Training 200 epochs on CIFAR10 and saving the model with the best performance on the validation set, three networks achieve a close accuracy. We use a CNN6 network as shown in Figure 3, which is full-rank considering gradient constraints and weight constraints. Additionally, we report results using a CNN6-d network, which is rank-deficient without consideration of virtual constraints, in order to fairly compare the performance of DLG and R-GAP. CNN6-d has a CNN6 backbone and just decreases the output channels of the second convolutional layer to 20. The activation function is a LeakyReLU except at the last layer, which is a Sigmoid. (A hypothetical sketch of such a network is given after the table.)
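
The recursion quoted in the Pseudocode row reconstructs the input in closed form, layer by layer from the last layer d back to layer 1. Below is a minimal sketch of that idea, assuming fully connected layers with a bijective LeakyReLU and using only the per-layer weight constraints (R-GAP additionally stacks gradient constraints before solving). All names here (rgap_fc, leaky_relu_inv, z_last) are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def leaky_relu_inv(y, slope=0.2):
    """Invert a LeakyReLU elementwise (possible because it is bijective)."""
    return np.where(y >= 0, y, y / slope)

def rgap_fc(weights, z_last):
    """Recurse from the last layer d down to layer 1 and return the input x1.

    weights: list of weight matrices [W1, ..., Wd] (hypothetical layout)
    z_last:  reconstructed pre-activation of layer d; in the paper this is
             derived analytically from the loss and the last layer's gradient.
    """
    z = z_last
    x = None
    for W in reversed(weights):
        # Weight constraint of layer i: W @ x = z, where x is the layer's input.
        # R-GAP additionally stacks the gradient constraints before solving;
        # this sketch keeps only the weight constraint and uses least squares.
        x, *_ = np.linalg.lstsq(W, z, rcond=None)
        # The recovered x is the post-activation of the layer below; undo the
        # activation to obtain that layer's pre-activation for the next step.
        z = leaky_relu_inv(x)
    return x
```

Such a solve recovers each layer's input exactly only when the stacked constraint system has full column rank, which is exactly the property the paper's rank analysis estimates for a given architecture.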
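The Experiment Setup row describes CNN6 and CNN6-d only at a high level. The following is a hypothetical PyTorch sketch of such a stack: six convolutional layers with LeakyReLU activations and a Sigmoid after the last layer. Channel widths, kernel sizes and strides are placeholders rather than the paper's Figure 3 configuration; only the 20-output-channel second layer of CNN6-d comes from the quoted text.

```python
import torch.nn as nn

def make_cnn6(second_layer_out=32):
    """Build a six-layer convolutional stack; use second_layer_out=20 for CNN6-d."""
    widths = [3, 32, second_layer_out, 64, 64, 128, 128]  # placeholder widths
    layers = []
    for i in range(6):
        layers.append(nn.Conv2d(widths[i], widths[i + 1], kernel_size=3,
                                stride=2 if i % 2 == 0 else 1, padding=1))
        # LeakyReLU after every layer except the last, which uses a Sigmoid.
        layers.append(nn.Sigmoid() if i == 5 else nn.LeakyReLU(0.2))
    return nn.Sequential(*layers)

cnn6 = make_cnn6()                        # "full-rank" setting in the paper
cnn6_d = make_cnn6(second_layer_out=20)   # rank-deficient CNN6-d variant
```

Narrowing the second layer's output channels reduces the number of gradient and weight constraints that layer contributes, which is how CNN6-d becomes rank-deficient in the sense used by the paper's rank analysis.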