Understanding Deep Gradient Leakage via Inversion Influence Functions
Authors: Haobo Zhang, Junyuan Hong, Yuyang Deng, Mehrdad Mahdavi, Jiayu Zhou
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate that I2F effectively approximated the DGL generally on different model architectures, datasets, modalities, attack implementations, and perturbation-based defenses. |
| Researcher Affiliation | Academia | Haobo Zhang, Michigan State University (zhan2060@msu.edu); Junyuan Hong, Michigan State University & University of Texas at Austin (jyhong@utexas.edu); Yuyang Deng, Pennsylvania State University (yzd82@psu.edu); Mehrdad Mahdavi, Pennsylvania State University (mahdavi@cse.psu.edu); Jiayu Zhou, Michigan State University (jiayuz@msu.edu) |
| Pseudocode | No | The paper describes the proposed Inversion Influence Function (I2F) and its derivations mathematically and in descriptive text, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our codes are provided in https://github.com/illidanlab/inversion-influence-function. |
| Open Datasets | Yes | We evaluate our metric on two image-classification datasets: MNIST (LeCun, 1998) and CIFAR10 (Krizhevsky et al., 2009). |
| Dataset Splits | No | The paper mentions using datasets like MNIST, CIFAR10, and ImageNet for evaluation, but it does not provide specific details on how the datasets were split into training, validation, and test sets (e.g., percentages or counts). |
| Hardware Specification | Yes | All the experiments are conducted on one NVIDIA RTX A5000 GPU with the PyTorch framework. |
| Software Dependencies | No | All the experiments are conducted on one NVIDIA RTX A5000 GPU with the PyTorch framework. |
| Experiment Setup | Yes | The learning rate of the two attacks is 0.1 and we use Adam as the optimizer. To consider a more powerful attack, only a single image is reconstructed in each inversion. When inverting LeNet, we uniformly initialize the model parameters in the range of [-0.5, 0.5] as (Sun et al., 2020) to get a stronger attack. |