Revisiting Gradient Pruning: A Dual Realization for Defending against Gradient Attacks
Authors: Lulu Xue, Shengshan Hu, Ruizhi Zhao, Leo Yu Zhang, Shengqing Hu, Lichao Sun, Dezhong Yao
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments show that DGP can effectively defend against the most powerful GIAs and reduce the communication cost without sacrificing the model's utility. |
| Researcher Affiliation | Collaboration | Lulu Xue (1), Shengshan Hu (1*), Ruizhi Zhao (1), Leo Yu Zhang (2), Shengqing Hu (3), Lichao Sun (4), Dezhong Yao (5). (1) School of Cyber Science and Engineering, Huazhong University of Science and Technology; (2) School of Information and Communication Technology, Griffith University; (3) Department of Nuclear Medicine, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology; (4) Department of Computer Science and Engineering, Lehigh University; (5) School of Computer Science and Technology, Huazhong University of Science and Technology |
| Pseudocode | Yes | Algorithm 1: Dual Gradient Pruning (DGP). Algorithm 2: A Complete Illustration of Our Defense. |
| Open Source Code | No | The paper does not provide a specific link or explicit statement about the release of its source code. |
| Open Datasets | Yes | We assess model privacy against various attacks and evaluate model performance on CIFAR10 and CIFAR100, which is a common setting used in many studies (Huang et al. 2021; Gao et al. 2021). |
| Dataset Splits | No | The paper does not explicitly specify the training/validation/test splits, only mentioning that datasets such as CIFAR10 and CIFAR100 are used in a common setting. |
| Hardware Specification | Yes | We run the experiments with PyTorch by using one RTX 2080 Ti GPU and a 2.10 GHz CPU. |
| Software Dependencies | No | The paper mentions using PyTorch but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | We follow the setting of (Gao et al. 2021), using ten users with the same data distribution. We assess model privacy against various attacks and evaluate model performance on CIFAR10 and CIFAR100... We adhere to the DP settings of (Sun et al. 2021) and use Gaussian noise with standard deviation σ = 10^-2. For Top-k and DGP, we set k = 20%, k1 + k2 = 80% with the regulation hyperparameter p = 1/15. The remaining defenses keep their original settings. |
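The paper's Algorithm 1 (DGP) is only named in the table above, and its pseudocode is not reproduced here. As a reading aid, the following is a minimal, hypothetical sketch of a two-sided magnitude-pruning rule consistent with the reported constraint k1 + k2 = 80%; the function name, the split between the "large" and "small" sides, and the omission of the regulation hyperparameter p = 1/15 are assumptions for illustration, not the authors' exact procedure.

```python
import torch

def dual_gradient_prune(grad: torch.Tensor, k1: float = 0.40, k2: float = 0.40) -> torch.Tensor:
    """Hypothetical two-sided pruning sketch (not the paper's exact DGP rule).

    Zeroes out a fraction k1 of the largest-magnitude entries and a fraction
    k2 of the smallest-magnitude entries, so k1 + k2 = 80% of the gradient is
    pruned in total. Consult Algorithm 1 of the paper for the actual DGP
    procedure, including how the regulation hyperparameter p = 1/15 enters.
    """
    flat = grad.flatten()
    n = flat.numel()
    order = flat.abs().argsort()          # indices sorted by ascending magnitude
    n_small = int(k2 * n)                 # count of smallest-magnitude entries to drop
    n_large = int(k1 * n)                 # count of largest-magnitude entries to drop
    pruned = torch.cat([order[:n_small], order[n - n_large:]])
    mask = torch.ones(n, dtype=torch.bool, device=grad.device)
    mask[pruned] = False
    return (flat * mask).view_as(grad)
```

In a simulated round matching the setup row (ten users with the same data distribution), each user would apply such a pruning function to its gradient before sending it to the server for aggregation.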
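The two comparison defenses named in the setup row, Top-k gradient pruning with k = 20% and Gaussian DP noise with σ = 10^-2, are standard baselines. Below is a hedged sketch of common realizations of each; the paper's exact implementations (e.g., per-layer versus whole-model pruning, or clipping before noise addition) may differ.

```python
import torch

def top_k_prune(grad: torch.Tensor, k: float = 0.20) -> torch.Tensor:
    """Keep only the fraction k of entries with the largest magnitudes (Top-k baseline, k = 20%)."""
    flat = grad.flatten()
    n_keep = max(1, int(k * flat.numel()))
    threshold = flat.abs().topk(n_keep).values.min()  # smallest magnitude that survives
    mask = flat.abs() >= threshold
    return (flat * mask).view_as(grad)

def gaussian_dp_noise(grad: torch.Tensor, sigma: float = 1e-2) -> torch.Tensor:
    """Add Gaussian noise with standard deviation sigma = 10^-2, as in the cited DP settings of Sun et al. (2021)."""
    return grad + sigma * torch.randn_like(grad)
```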