LAG: Lazily Aggregated Gradient for Communication-Efficient Distributed Learning
Authors: Tianyi Chen, Georgios Giannakis, Tao Sun, Wotao Yin
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Numerical experiments on both synthetic and real data corroborate a significant communication reduction compared to alternatives. |
| Researcher Affiliation | Academia | University of Minnesota Twin Cities, Minneapolis, MN 55455, USA; National University of Defense Technology, Changsha, Hunan 410073, China; University of California Los Angeles, Los Angeles, CA 90095, USA |
| Pseudocode | Yes | Algorithm 1 (LAG-WK) and Algorithm 2 (LAG-PS) |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | Yes | The paper states: "Performance is also tested on the real datasets [2]: a) linear regression using Housing, Body fat, Abalone datasets; and, b) logistic regression using Ionosphere, Adult, Derm datasets". It also cites "M. Lichman, UCI machine learning repository, 2013. [Online]. Available: http://archive.ics.uci.edu/ml" as [36], which is a well-known public repository. |
| Dataset Splits | No | The paper mentions "Each dataset is evenly split into three workers" but does not provide specific training, validation, or test dataset split percentages, counts, or methodology. |
| Hardware Specification | Yes | All experiments were performed using MATLAB on an Intel CPU @ 3.4 GHz (32 GB RAM) desktop. |
| Software Dependencies | No | The paper states "All experiments were performed using MATLAB" but does not specify a version number for MATLAB or any other software dependencies with version numbers. |
| Experiment Setup | Yes | Stepsizes for LAG-WK, LAG-PS, and GD are chosen as α = 1/L; to optimize performance and guarantee stability, α = 1/(ML) is used in Cyc-IAG and Num-IAG. For LAG-WK, ξ_d = ξ = 1/D with D = 10; for LAG-PS, the more aggressive ξ_d = ξ = 10/D with D = 10 is chosen. For logistic regression, the regularization parameter is set to λ = 10⁻³. |
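The trigger parameters in the setup row (ξ_d = ξ = 1/D with D = 10) govern when a worker re-uploads its gradient. Since no open-source code accompanies the paper, the sketch below is an illustrative Python reconstruction of the worker-side LAG-WK rule on a synthetic least-squares problem, not the authors' MATLAB implementation; the function name `lag_wk`, the conservative stepsize, and the synthetic data are assumptions for the demo.

```python
import numpy as np

def lag_wk(X_parts, y_parts, alpha, xi=0.1, D=10, iters=500):
    """Sketch of LAG-WK: worker m re-uploads its gradient only when the
    change since its last upload exceeds a threshold built from the
    squared parameter moves of the last D rounds."""
    M = len(X_parts)
    theta = np.zeros(X_parts[0].shape[1])
    stale = [np.zeros_like(theta) for _ in range(M)]  # last uploaded gradients
    agg = np.zeros_like(theta)                        # server-side lazy aggregate
    moves = []                                        # ||theta^{k+1-d} - theta^{k-d}||^2
    comms = 0                                         # worker-to-server uploads
    for _ in range(iters):
        # communication trigger threshold (first round: 0, so everyone uploads)
        thresh = xi / (alpha**2 * M**2) * sum(moves[-D:])
        for m in range(M):
            # fresh local least-squares gradient
            g = X_parts[m].T @ (X_parts[m] @ theta - y_parts[m]) / len(y_parts[m])
            if np.sum((g - stale[m]) ** 2) > thresh:
                agg += g - stale[m]  # communicate only the innovation
                stale[m] = g
                comms += 1
        new_theta = theta - alpha * agg
        moves.append(np.sum((new_theta - theta) ** 2))
        theta = new_theta
    return theta, comms
```

With ξ = 1/D as in the table, a worker whose local gradient has barely moved skips the round and the server reuses its stale gradient, so the total upload count stays well below the M × iterations that plain GD would need.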