High-Fidelity Gradient Inversion in Distributed Learning
Authors: Zipeng Ye, Wenjian Luo, Qi Zhou, Yubo Tang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments demonstrate the superiority of our approach, reveal the potential vulnerabilities of the distributed learning paradigm, and emphasize the necessity of developing more secure mechanisms. |
| Researcher Affiliation | Academia | Zipeng Ye (1,2), Wenjian Luo (1,2,3*), Qi Zhou (1,2), Yubo Tang (1,2). 1: School of Computer Science and Technology, Harbin Institute of Technology, Shenzhen; 2: Guangdong Provincial Key Laboratory of Novel Security Intelligence Technologies; 3: Peng Cheng Laboratory |
| Pseudocode | Yes | Pseudocode of this part is provided in Appendix A. |
| Open Source Code | Yes | Source code is available at https://github.com/MiLab-HITSZ/2023YeHFGradInv. |
| Open Datasets | Yes | We conduct experiments for the large-scale image classification task using the ImageNet ILSVRC 2012 dataset (Deng et al. 2009), as well as randomly collected images from the Web. |
| Dataset Splits | Yes | We randomly select 64 images from the validation set of ImageNet, and their label distributions are shown in Fig. 4 (a). |
| Hardware Specification | Yes | Each batch is optimized with 15K iterations on NVIDIA TITAN RTX GPUs. |
| Software Dependencies | No | The paper mentions using the PyTorch library and the Adam optimizer, but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | The dropout rate used in (10) is set to 0.3. We set β = 2 in (11), λ̂_tv = 0.01, λ̂_BN = 10^(-4), and T1 = 3,000, T2 = 5,000 in the scheduling strategy. We use Adam (Kingma and Ba 2014) for optimization with a step learning rate decay, and each batch is optimized with 15K iterations on NVIDIA TITAN RTX GPUs. A configuration sketch based on these values is shown below the table. |
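To make the reported setup concrete, below is a minimal PyTorch sketch of a gradient-inversion optimization loop wired to the stated values (Adam, step learning-rate decay, 15K iterations per batch, TV and BN regularization weights, and the T1/T2 scheduling thresholds). The learning rate, the scheduler milestones, and the `grad_match_loss` / `bn_loss` callables are assumptions for illustration only; they are not the authors' implementation, which is available in the linked repository.

```python
import torch

# Hyperparameters reported in the paper
LAMBDA_TV = 0.01       # total-variation regularization weight
LAMBDA_BN = 1e-4       # BN-statistics regularization weight
NUM_ITERS = 15_000     # optimization iterations per batch
T1, T2 = 3_000, 5_000  # scheduling thresholds from the paper


def total_variation(x):
    """Isotropic total-variation prior over a batch of NCHW images."""
    dh = (x[:, :, 1:, :] - x[:, :, :-1, :]).abs().mean()
    dw = (x[:, :, :, 1:] - x[:, :, :, :-1]).abs().mean()
    return dh + dw


def invert_gradients(model, target_grads, batch_shape, grad_match_loss, bn_loss):
    """Optimize dummy inputs so their gradients match `target_grads`.

    `grad_match_loss` and `bn_loss` are placeholders for the paper's
    gradient-matching and BN-statistics terms (see the paper/repository).
    """
    dummy = torch.randn(batch_shape, requires_grad=True)
    optimizer = torch.optim.Adam([dummy], lr=0.1)  # lr is an assumption
    # Step learning-rate decay as stated in the paper; reusing T1/T2 as
    # milestones here is purely illustrative (the paper uses them in its
    # regularizer scheduling strategy).
    scheduler = torch.optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[T1, T2], gamma=0.1
    )

    for _ in range(NUM_ITERS):
        optimizer.zero_grad()
        loss = grad_match_loss(model, dummy, target_grads)
        loss = loss + LAMBDA_TV * total_variation(dummy) + LAMBDA_BN * bn_loss(model)
        loss.backward()
        optimizer.step()
        scheduler.step()

    return dummy.detach()
```

The two placeholder callables are where the paper's actual contribution lives; the skeleton above only shows how the reported optimizer, iteration budget, and regularization weights fit together.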