Locally Differentially Private Decentralized Stochastic Bilevel Optimization with Guaranteed Convergence Accuracy
Authors: Ziqin Chen, Yongqiang Wang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on machine learning models confirm the efficacy of our algorithm. We conduct experimental evaluation using several machine learning problems. The results confirm the efficiency of our algorithm on both synthetic and real-world datasets. |
| Researcher Affiliation | Academia | 1Department of Electrical and Computer Engineering, Clemson University, Clemson, SC, 29634, United States. Correspondence to: Yongqiang Wang <yongqiw@clemson.edu>. |
| Pseudocode | Yes | Algorithm 1: Subroutine for Estimating Hessian-Inverse Vector Product for Agent i, i ∈ [m], and Algorithm 2: LDP Design for DSBO Algorithm for Agent i, i ∈ [m] |
| Open Source Code | No | No statement about releasing code or links to repositories. |
| Open Datasets | Yes | MNIST: We evaluated the performance of Algorithm 2 using the MNIST dataset (Grazzi et al., 2020). CIFAR-10: In this experiment, we evaluated the performance of Algorithm 2 on the CIFAR-10 dataset (Krizhevsky et al., 2010). |
| Dataset Splits | Yes | In the MNIST experiments of Section 5.1, both the training and validation datasets consist of 60,000 images, and the test dataset contains 10,000 images. Each task involves a training dataset D_q^tra and a validation dataset D_q^val, both designed for 5-way classification with 50 shots per class. |
| Hardware Specification | No | No specific hardware details are mentioned. |
| Software Dependencies | No | No specific software dependencies with version numbers are mentioned. |
| Experiment Setup | Yes | In the synthetic-data experiment of Section 5.1, the stepsizes for Algorithm 2 were set to λ_{x,t} = 0.05/(t+1)^{0.95}, λ_{y,t} = 0.05/(t+1)^{0.87}, and λ_{z,t} = 0.02/(t+1)^{0.75}. Each element of the DP-noise vectors χ_{i,t}, ζ_{i,t}, and ϑ_{i,t} for agent i follows the Laplace distributions Lap(1/(2(t+1)^{0.8+0.01i})), Lap(1/(2(t+1)^{0.76+0.01i})), and Lap(1/(2(t+1)^{0.6+0.01i})), respectively. The algorithm was executed for 100 iterations, with each agent randomly selecting 50 training samples in every iteration. The training is conducted over 6 epochs. For Algorithm 2, the stepsizes were set to λ_{x,t} = 1.2/(t+1)^{0.95}, λ_{y,t} = 1.2/(t+1)^{0.87}, and λ_{z,t} = 1.2/(t+1)^{0.75}. |
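The diminishing stepsize and Laplace DP-noise schedules quoted above can be sketched as follows. This is a minimal illustration, not the authors' released code: the function names are hypothetical, and the reading of the Laplace scales as Lap(1/(2(t+1)^{0.8+0.01i})) (with agent-dependent decay exponents 0.8+0.01i, 0.76+0.01i, 0.6+0.01i) is an assumption reconstructed from the garbled quote.

```python
import numpy as np

def stepsizes(t):
    # Diminishing stepsizes from the synthetic-data experiment (Section 5.1):
    # the x-, y-, and z-updates decay at different polynomial rates.
    lam_x = 0.05 / (t + 1) ** 0.95
    lam_y = 0.05 / (t + 1) ** 0.87
    lam_z = 0.02 / (t + 1) ** 0.75
    return lam_x, lam_y, lam_z

def dp_noise(t, i, dim, rng):
    # Laplace DP noise for agent i at iteration t, under the assumed reading
    # Lap(1 / (2 (t+1)^{0.8 + 0.01 i})) for chi_{i,t}; the scales for
    # zeta_{i,t} and vartheta_{i,t} use exponents 0.76 + 0.01 i and
    # 0.6 + 0.01 i, so the noise decays over iterations.
    scale_chi = 1.0 / (2.0 * (t + 1) ** (0.8 + 0.01 * i))
    scale_zeta = 1.0 / (2.0 * (t + 1) ** (0.76 + 0.01 * i))
    scale_theta = 1.0 / (2.0 * (t + 1) ** (0.6 + 0.01 * i))
    return (rng.laplace(0.0, scale_chi, dim),
            rng.laplace(0.0, scale_zeta, dim),
            rng.laplace(0.0, scale_theta, dim))
```

Note that at t = 0 all three noise scales equal 1/2 regardless of agent index, and the agent-dependent exponent only changes how fast each agent's noise decays.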