PrivSGP-VR: Differentially Private Variance-Reduced Stochastic Gradient Push with Tight Utility Bounds
Authors: Zehan Zhu, Yan Huang, Xin Wang, Jinming Xu
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments corroborate our theoretical findings, especially in terms of the maximized utility with optimized K, in fully decentralized settings. |
| Researcher Affiliation | Academia | 1Zhejiang University, Hangzhou, China 2Qilu University of Technology, Jinan, China |
| Pseudocode | Yes | Algorithm 1 PrivSGP-VR |
| Open Source Code | No | The paper does not explicitly state that its source code is open-sourced or provide a direct link to a code repository. It only refers to a 'full version' on arXiv. |
| Open Datasets | Yes | We consider two non-convex learning tasks (i.e., deep CNN ResNet-18 [He et al., 2016] training on the CIFAR-10 dataset [Krizhevsky, 2009] and shallow 2-layer neural network training on the MNIST dataset [Deng, 2012]), in a fully decentralized setting. |
| Dataset Splits | No | The paper states 'For all experiments, we split shuffled datasets evenly to n nodes.' but does not provide specific training/validation/test split percentages, absolute sample counts, or explicit methodology for these splits. |
| Hardware Specification | Yes | All experiments are deployed on a high-performance computer with an Intel Xeon E5-2680 v4 CPU @ 2.40GHz and 8 Nvidia RTX 3090 GPUs |
| Software Dependencies | No | The paper mentions 'PyTorch' and 'torch.distributed' but does not specify their version numbers. |
| Experiment Setup | Yes | All configurations utilize the same DP Gaussian noise variance σᵢ² = 0.03 for each node i. For each node i, we set the privacy budget to ϵᵢ = 3 and δᵢ = 10⁻⁵. ... we apply DP Gaussian noise with an identical variance of σᵢ² = 0.03 for both PrivSGP-VR and PrivSGP. Moreover, both algorithms were executed for a fixed number of 3000 iterations. |
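The per-node noise injection described in the Experiment Setup row can be sketched as gradient clipping followed by Gaussian perturbation. This is a minimal illustrative sketch, not the paper's implementation: the function name and the `clip_norm` parameter are assumptions; only the noise variance σᵢ² = 0.03 comes from the paper.

```python
import math
import random

def privatize_gradient(grad, clip_norm=1.0, sigma_sq=0.03, rng=random):
    """Clip `grad` to L2 norm `clip_norm`, then add Gaussian noise.

    `clip_norm` is a hypothetical parameter for this sketch; the paper
    specifies only the noise variance sigma_i^2 = 0.03 per node.
    """
    # Scale the gradient down if its L2 norm exceeds the clipping threshold.
    norm = math.sqrt(sum(g * g for g in grad))
    scale = min(1.0, clip_norm / norm) if norm > 0 else 1.0
    clipped = [g * scale for g in grad]
    # Add zero-mean Gaussian noise with the stated variance to each coordinate.
    std = math.sqrt(sigma_sq)
    return [g + rng.gauss(0.0, std) for g in clipped]

# Example: a node privatizes its local stochastic gradient before sharing.
noisy_grad = privatize_gradient([0.5, -1.2, 3.0])
```

Each node would apply this to its local stochastic gradient every iteration before the push-sum communication step.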