Towards More Efficient Stochastic Decentralized Learning: Faster Convergence and Sparse Communication
Authors: Zebang Shen, Aryan Mokhtari, Tengfei Zhou, Peilin Zhao, Hui Qian
ICML 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experiments on convex minimization and AUC-maximization validate the efficiency of our method." and "In this section, we evaluate the empirical performance of DSBA and compare it with several state-of-the-art methods including: DSA, EXTRA, SSDA, and DLM." |
| Researcher Affiliation | Collaboration | Zhejiang University; Tencent AI Lab; Massachusetts Institute of Technology; South China University of Technology. |
| Pseudocode | Yes | "Algorithm 1: DSBA for node n" and "Algorithm 2: Computation on node 0 at iteration t" |
| Open Source Code | No | The paper does not provide concrete access to source code or a statement about its public availability. |
| Open Datasets | Yes | "As to dataset, we use News20-binary, RCV1, and Sector from LIBSVM dataset" (see the data-partitioning sketch after this table). |
| Dataset Splits | No | The paper states it "randomly split them into N partitions with equal sizes" but does not provide specific train/validation/test split percentages, sample counts, or references to predefined splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific details regarding software dependencies with version numbers (e.g., programming languages, libraries, or frameworks). |
| Experiment Setup | Yes | "We tune the step size of all algorithms and select the ones that give the best performance." and "The ℓ2-regularization parameter λ is set to 1/(10Q) in all cases." (see the regularization sketch after this table). |
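
The Open Datasets and Dataset Splits rows above describe loading LIBSVM-format data and randomly splitting it into N equal-size partitions, but the paper gives no concrete split details. The sketch below shows one way such a partition could be reproduced; the file name `news20.binary`, the node count, the random seed, and the use of scikit-learn's `load_svmlight_file` are assumptions for illustration, not details from the paper.

```python
# Minimal sketch (assumed setup): load a LIBSVM-format dataset and randomly
# split it into N equal-size partitions, one per node, as described in the table.
import numpy as np
from sklearn.datasets import load_svmlight_file

def partition_dataset(path, n_nodes, seed=0):
    """Load a LIBSVM-format file and shuffle-split it into n_nodes equal parts."""
    X, y = load_svmlight_file(path)             # sparse feature matrix and labels
    rng = np.random.default_rng(seed)
    order = rng.permutation(X.shape[0])         # random shuffle before splitting
    blocks = np.array_split(order, n_nodes)     # near-equal-size index blocks
    return [(X[idx], y[idx]) for idx in blocks]

# Hypothetical usage: 20 nodes; the paper does not state the node count here.
partitions = partition_dataset("news20.binary", n_nodes=20)
```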
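The Experiment Setup row quotes only the rule λ = 1/(10Q), where Q is the total number of training samples. A minimal sketch of how that rule might enter an ℓ2-regularized objective follows; the logistic loss is an assumed choice for the convex-minimization experiment, and only the 1/(10Q) rule comes from the quoted setup.

```python
import numpy as np

def l2_logistic_objective(w, X, y):
    """Average logistic loss plus an l2 penalty with lambda = 1/(10*Q).

    The logistic loss itself is an assumption; only the lambda = 1/(10Q)
    rule is taken from the experiment-setup quote above.
    """
    Q = X.shape[0]                               # total number of samples
    lam = 1.0 / (10.0 * Q)                       # regularization rule from the paper
    margins = y * (X @ w)                        # labels y assumed in {-1, +1}
    loss = np.mean(np.log1p(np.exp(-margins)))   # average logistic loss
    return loss + 0.5 * lam * np.dot(w, w)       # l2-regularized objective
```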