EControl: Fast Distributed Optimization with Compression and Error Control
Authors: Yuan Gao, Rustem Islamov, Sebastian U. Stich
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive numerical evaluations to illustrate the efficacy of our method and support our theoretical findings. |
| Researcher Affiliation | Collaboration | Universität des Saarlandes; Universität Basel; CISPA Helmholtz Center for Information Security |
| Pseudocode | Yes | Algorithm 1 EC-Ideal; Algorithm 2 EControl (a generic error-feedback sketch illustrating this algorithm family follows the table). |
| Open Source Code | Yes | We provide source code as part of the supplementary material, which allows reproducing the deep learning and synthetic least-squares experiments. |
| Open Datasets | Yes | MNIST (Deng, 2012) and CIFAR-10 (Krizhevsky et al., 2014) datasets. |
| Dataset Splits | No | The paper mentions 'train (90%) and test (10%) sets' for the logistic regression problem and for CIFAR-10, but no explicit validation split is described. |
| Hardware Specification | No | No specific hardware (e.g., GPU/CPU models, memory details, or cloud instance types) used for experiments is mentioned in the paper. |
| Software Dependencies | No | The implementation is done in PyTorch (Paszke et al., 2019), but no specific version numbers for PyTorch or other software dependencies are provided. |
| Experiment Setup | Yes | Stepsizes were tuned for each setting. The x-axis represents the number of bits sent. ... For EControl we fine-tune η over {10^−3, 5 × 10^−3, 10^−2, 5 × 10^−2, 10^−1}. ... We fine-tune the stepsizes of the methods over {1, 10^−1, 10^−2, 10^−3}. Moreover, we fine-tune the η parameter for EControl over {0.2, 0.1, 0.05}. (A grid-search sketch mirroring this tuning protocol follows the table.) |
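
The pseudocode row above names the paper's Algorithm 1 (EC-Ideal) and Algorithm 2 (EControl). As a rough illustration of the algorithm family they belong to, below is a minimal Python sketch of worker-side error feedback with top-k compression. The function names `topk_compress` and `ef_sgd_step` are invented here, and the placement of the error-control factor `eta` is a hypothetical stand-in for EControl's η; the paper's Algorithm 2 should be consulted for the exact update.

```python
import numpy as np

def topk_compress(v, k):
    """Top-k sparsification: keep the k largest-magnitude entries, zero the rest."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd_step(x, e, grad_fn, gamma, eta, k):
    """One worker-side step of a generic error-feedback scheme with an
    error-control factor `eta`. This is only a sketch of the EF family;
    see the paper's Algorithm 2 for the actual EControl update."""
    g = grad_fn(x)
    # compress the error-corrected gradient; `eta` scales how strongly the
    # accumulated error is re-injected (hypothetical placement of eta)
    m = topk_compress(g + eta * e, k)
    e = e + g - m      # accumulate whatever the compressor dropped
    x = x - gamma * m  # apply the compressed message
    return x, e

# tiny usage check on f(x) = 0.5 * ||x||^2, whose gradient is x
x, e = np.ones(10), np.zeros(10)
for _ in range(200):
    x, e = ef_sgd_step(x, e, grad_fn=lambda z: z, gamma=0.1, eta=1.0, k=3)
print(np.linalg.norm(x))  # shrinks toward 0
```

With `eta = 1.0` this reduces to classical error feedback; EControl's contribution lies precisely in how the error term is controlled, which this sketch does not reproduce.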
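
The tuning protocol quoted in the last row amounts to a grid search over stepsizes and η. The sketch below mirrors those grids; `run_experiment` is a hypothetical stand-in for a full training run, replaced here by plain gradient descent on a random least-squares objective (echoing the paper's synthetic experiments). Only the grid values come from the report itself.

```python
import itertools
import numpy as np

def run_experiment(stepsize, eta, steps=200, seed=0):
    """Toy stand-in for a training run: gradient descent on a random
    least-squares problem. `eta` only mimics the tuning interface here;
    it is a placeholder for EControl's error-control parameter."""
    rng = np.random.default_rng(seed)
    A, b = rng.normal(size=(50, 10)), rng.normal(size=50)
    x = np.zeros(10)
    for _ in range(steps):
        x -= stepsize * A.T @ (A @ x - b) / len(b)
    return float(np.mean((A @ x - b) ** 2))

stepsizes = [1.0, 1e-1, 1e-2, 1e-3]  # stepsize grid from the report
etas = [0.2, 0.1, 0.05]              # EControl's eta grid (deep learning setup)

best = min((run_experiment(g, e), g, e)
           for g, e in itertools.product(stepsizes, etas))
print(f"best loss {best[0]:.4g} at stepsize={best[1]}, eta={best[2]}")
```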