Stabilized Proximal-Point Methods for Federated Optimization
Authors: Xiaowen Jiang, Anton Rodomanov, Sebastian U. Stich
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 6 (Numerical Experiments): "In this section, we illustrate the performance of our methods in numerical experiments. The implementation can be found at https://github.com/mlolab/S-DANE." |
| Researcher Affiliation | Academia | Xiaowen Jiang (Saarland University and CISPA), xiaowen.jiang@cispa.de; Anton Rodomanov (CISPA), anton.rodomanov@cispa.de; Sebastian U. Stich (CISPA), stich@cispa.de. CISPA Helmholtz Center for Information Security, Saarbrücken, Germany. |
| Pseudocode | Yes | Algorithm 1 "S-DANE: Stabilized DANE" (a hedged sketch of the DANE-style round appears after this table). |
| Open Source Code | Yes | The implementation can be found at https://github.com/mlolab/S-DANE. |
| Open Datasets | Yes | We use the ijcnn dataset from LIBSVM [6]. Lastly, we consider the multi-class classification task with CIFAR10 [34] using ResNet-18 [18]. |
| Dataset Splits | No | The paper describes its data-splitting method (Dirichlet-distribution partitioning for ijcnn and CIFAR10) but gives no explicit train/validation/test percentages or sizes, nor citations to standard splits, for all experiments (a sketch of Dirichlet partitioning follows this table). |
| Hardware Specification | Yes | We simulate the experiment on one NVIDIA DGX A100. The other experiments are run on a MacBook Pro laptop. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | We use SGD with a batch size of 512 as a local solver for each device. For all the methods considered in Figure 3, we choose the best number of local steps among {10, 20, ..., 80} (for SCAFFNEW, this becomes the inverse of the probability) and the best learning rate among {0.02, 0.05, 0.1}. (A sketch of this sweep follows the table.) |
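
The Pseudocode row quotes Algorithm 1, "S-DANE: Stabilized DANE". The sketch below is not the authors' implementation (see the linked repository for that); it shows a generic DANE-style communication round, on which S-DANE builds. All names (`dane_style_round`, `mu`, `inner_steps`) are assumptions, and S-DANE's distinguishing stabilization step, which maintains a prox center decoupled from the averaged iterate, is only noted in a comment rather than reproduced.

```python
# Schematic DANE-style round; illustrative, not the authors' API.
import numpy as np

def dane_style_round(x, clients, mu, inner_steps=50, inner_lr=0.1):
    """One communication round of a DANE-style proximal-point method.

    clients: list of callables, each returning the local gradient at a point.
    mu: proximal regularization strength (similarity parameter).
    """
    # 1) Every client reports its local gradient at the current iterate.
    local_grads = [grad_fn(x) for grad_fn in clients]
    g = np.mean(local_grads, axis=0)  # full-gradient estimate

    # 2) Each client approximately solves its corrected local subproblem
    #    min_y  f_i(y) - <grad f_i(x) - g, y> + (mu/2) ||y - x||^2,
    #    here by plain gradient descent (any local solver could be used).
    #    S-DANE would center the proximal term at a separate, more slowly
    #    updated stabilization point instead of at x (not shown here).
    new_iterates = []
    for grad_fn, gi in zip(clients, local_grads):
        y = x.copy()
        for _ in range(inner_steps):
            sub_grad = grad_fn(y) - (gi - g) + mu * (y - x)
            y -= inner_lr * sub_grad
        new_iterates.append(y)

    # 3) The server averages the local solutions.
    return np.mean(new_iterates, axis=0)

# Toy usage: two quadratic clients f_i(y) = 0.5 * ||y - b_i||^2.
b = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
clients = [lambda y, bi=bi: y - bi for bi in b]
x = np.zeros(2)
for _ in range(5):
    x = dane_style_round(x, clients, mu=1.0)
```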
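
The Dataset Splits row mentions Dirichlet-distribution partitioning of ijcnn and CIFAR10 across clients. Since the paper does not spell out the exact procedure, the following is a minimal sketch of the standard Dirichlet label-skew partition from the federated-learning literature; the concentration parameter `alpha` and all function names are assumptions, not values from the paper.

```python
import numpy as np

def dirichlet_partition(labels, num_clients, alpha, seed=0):
    """Partition sample indices across clients with Dirichlet label skew.

    Smaller alpha -> more heterogeneous (non-iid) client datasets.
    """
    rng = np.random.default_rng(seed)
    num_classes = int(labels.max()) + 1
    client_indices = [[] for _ in range(num_clients)]
    for c in range(num_classes):
        idx = np.where(labels == c)[0]
        rng.shuffle(idx)
        # Sample per-client proportions of class c, then split accordingly.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        cuts = (np.cumsum(proportions)[:-1] * len(idx)).astype(int)
        for client_id, part in enumerate(np.split(idx, cuts)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Example: split 10-class labels across 8 simulated devices.
labels = np.random.randint(0, 10, size=50_000)
parts = dirichlet_partition(labels, num_clients=8, alpha=0.1)
```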
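
Finally, the Experiment Setup row reports a sweep over local-step counts {10, 20, ..., 80} and learning rates {0.02, 0.05, 0.1}, with SGD (batch size 512) as the local solver. A minimal sketch of such a sweep is below; `run_federated_training` is a hypothetical stand-in for whichever training loop is used, not a function from the authors' repository.

```python
import itertools

LOCAL_STEPS = range(10, 81, 10)   # {10, 20, ..., 80}
LEARNING_RATES = [0.02, 0.05, 0.1]
BATCH_SIZE = 512                  # local SGD batch size from the paper

def sweep(run_federated_training):
    """Pick the best (local_steps, lr) pair by final objective value."""
    best = None
    for steps, lr in itertools.product(LOCAL_STEPS, LEARNING_RATES):
        # For SCAFFNEW, `steps` would instead set the inverse of the
        # communication probability (per the table above).
        loss = run_federated_training(local_steps=steps, lr=lr,
                                      batch_size=BATCH_SIZE)
        if best is None or loss < best[0]:
            best = (loss, steps, lr)
    return best
```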