Decentralized Stochastic Optimization and Gossip Algorithms with Compressed Communication
Authors: Anastasia Koloskova, Sebastian Stich, Martin Jaggi
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We (iii) show in experiments that both of our algorithms do outperform the respective state-of-the-art baselines and CHOCO-SGD can reduce communication by at least two orders of magnitudes. |
| Researcher Affiliation | Academia | EPFL, Lausanne, Switzerland. |
| Pseudocode | Yes | Algorithm 1 CHOCO-GOSSIP (a hedged sketch of this algorithm follows the table). |
| Open Source Code | No | The paper does not provide an explicit statement about releasing the source code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | Datasets. We rely on the epsilon (Sonnenburg et al., 2008) and rcv1 (Lewis et al., 2004) datasets (cf. Table 2). |
| Dataset Splits | No | The paper describes how samples are distributed among workers under two data settings (randomly shuffled vs. sorted) but does not specify explicit training/validation/test splits (e.g., percentages or sample counts) used in the experiments; a sketch of the two settings follows the table. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions using 'scikit-learn (Pedregosa et al., 2011)' but does not provide specific version numbers for this or any other software dependencies. |
| Experiment Setup | Yes | Table 4 lists 'SGD learning rates ηt = a/(t+b) and consensus learning rates γ used in the experiments in Figs. 5 and 6,' providing specific hyperparameter values for each algorithm and compression scheme; a sketch of this step-size schedule follows the table. |
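
For the Pseudocode row above: a minimal NumPy sketch of CHOCO-GOSSIP (Algorithm 1), in which every node keeps a private value x_i and a public estimate x̂_i that is only ever updated through compressed differences. The ring topology, the rand-k sparsifier, the consensus learning rate γ, and the dimension/iteration counts are illustrative assumptions, not the paper's experimental configuration.

```python
# Hedged sketch of CHOCO-GOSSIP (Algorithm 1); topology, compressor and
# hyperparameters below are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)
n, d = 8, 20          # number of nodes and problem dimension (assumed)
gamma = 0.1           # consensus learning rate (assumed value)

# Symmetric, doubly stochastic mixing matrix W for a ring graph.
W = np.zeros((n, n))
for i in range(n):
    W[i, i] = 0.5
    W[i, (i - 1) % n] = 0.25
    W[i, (i + 1) % n] = 0.25

def rand_k(v, k=5):
    """rand-k sparsification: keep k random coordinates, zero the rest."""
    out = np.zeros_like(v)
    idx = rng.choice(v.size, size=k, replace=False)
    out[idx] = v[idx]
    return out

x = rng.normal(size=(n, d))   # private iterates x_i
x_hat = np.zeros_like(x)      # public estimates \hat{x}_i known to neighbours
avg = x.mean(axis=0)          # node average; preserved exactly by the updates

err0 = np.linalg.norm(x - avg)
for t in range(2000):
    # 1) each node compresses the difference between its value and its estimate
    q = np.stack([rand_k(x[i] - x_hat[i]) for i in range(n)])
    # 2) after (conceptually) exchanging q with neighbours, update the estimates
    x_hat = x_hat + q
    # 3) consensus step on the public estimates:
    #    x_i <- x_i + gamma * sum_j w_ij (x_hat_j - x_hat_i)
    x = x + gamma * (W @ x_hat - x_hat)

print(f"consensus error: {err0:.3f} -> {np.linalg.norm(x - avg):.3e}")
```

With a rand-k operator each node transmits k coordinates per round instead of d, which is where the communication savings come from; the private average is preserved because W is doubly stochastic.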
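
For the Dataset Splits row above: a short sketch of the two worker-level data distributions mentioned there ("randomly shuffled" vs. "sorted"). The sorting key (the class label) and the helper name are assumptions for illustration; this is not the paper's code.

```python
# Hypothetical helper: split a dataset across workers either after a random
# shuffle (roughly i.i.d. per worker) or after sorting by label (assumed key),
# which gives each worker a skewed class distribution.
import numpy as np

def partition(features, labels, n_workers, setting="shuffled", seed=0):
    order = (np.random.default_rng(seed).permutation(len(labels))
             if setting == "shuffled"
             else np.argsort(labels, kind="stable"))
    return [(features[idx], labels[idx])
            for idx in np.array_split(order, n_workers)]
```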
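
For the Experiment Setup row above: the SGD step size reported in Table 4 appears to follow a decaying schedule of the form ηt = a/(t+b). A tiny sketch with placeholder values for a and b; the tuned per-run constants from Table 4 are not reproduced here.

```python
# Decaying SGD learning-rate schedule eta_t = a / (t + b).
# The constants a and b are placeholders, not the paper's tuned values.
def lr_schedule(t, a=1.0, b=10.0):
    return a / (t + b)

# Step sizes at a few illustrative iterations:
print([round(lr_schedule(t), 4) for t in (0, 100, 1000)])
```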