Decentralized Optimization with Edge Sampling

Authors: Chi Zhang, Qianxiao Li, Peilin Zhao

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | These theoretical findings are validated by both numerical experiments on the mixing rates of Markov chains and distributed machine learning problems. We have theoretically shown that the DDA-ES algorithm achieves the goal of reducing the communication cost on each round while accelerating the overall convergence rate under the same communication budget, and we shall validate our findings with numerical experiments in this part.
Researcher Affiliation | Collaboration | 1 Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly, Nanyang Technological University, Singapore; 2 IHPC, Agency for Science, Technology and Research, Singapore; 3 Tencent AI Lab, China
Pseudocode | Yes | Algorithm 1: Distributed Dual Averaging with Edge Sampling (DDA-ES). An illustrative sketch in this spirit appears after the table.
Open Source Code | No | The paper does not provide a repository link or an explicit statement about releasing the source code for the described methodology.
Open Datasets | Yes | We now consider a distributed optimization problem for the a9a dataset (footnote 3: https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/). A snippet showing one way to load this dataset appears after the table.
Dataset Splits | No | The paper references the a9a dataset but does not give train/validation/test splits, percentages, or sample counts.
Hardware Specification | No | The paper does not report the hardware (e.g., exact GPU/CPU models, processor types, or memory amounts) used to run its experiments.
Software Dependencies | No | The paper does not specify version numbers for any software components, libraries, or solvers used in the experiments.
Experiment Setup | Yes | For all algorithms, we set η_t = O(1/√t) as suggested in the previous theoretical analysis, with the proximal function in Eq. (3) set as ψ(w) = (λ/2)||w||^2 and λ = 10^-3.
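
The quoted setup fixes only the step-size schedule and the proximal function. Below is a minimal, hedged sketch of distributed dual averaging with random edge sampling in that spirit; it is not the authors' Algorithm 1. The ring communication graph, the Metropolis-style mixing weights on sampled edges, the edge activation probability, the synthetic logistic-regression data, and the norm-ball constraint on the primal step are all illustrative assumptions not taken from the paper.

```python
# Illustrative sketch only: distributed dual averaging with random edge sampling.
# NOT a reproduction of the paper's Algorithm 1 (DDA-ES); graph, mixing weights,
# sampling probability, data, and the norm-ball constraint are assumptions.
import numpy as np

rng = np.random.default_rng(0)

n_nodes, dim, T = 8, 5, 500
lam = 1e-3          # lambda in the quoted proximal function psi(w) = (lam/2) * ||w||^2
p_edge = 0.5        # assumed probability that an edge communicates in a round
radius = 10.0       # assumed norm-ball constraint keeping the primal step bounded

# Ring graph: node i talks to its neighbours (i - 1) % n and (i + 1) % n.
edges = [(i, (i + 1) % n_nodes) for i in range(n_nodes)]

# Synthetic local logistic-regression data with labels in {-1, +1}.
X = rng.normal(size=(n_nodes, 50, dim))
w_true = rng.normal(size=dim)
y = np.sign(X @ w_true + 0.1 * rng.normal(size=(n_nodes, 50)))

z = np.zeros((n_nodes, dim))   # dual (accumulated-gradient) variables
w = np.zeros((n_nodes, dim))   # primal iterates

def local_grad(i, wi):
    """Gradient of node i's average logistic loss at wi."""
    margins = y[i] * (X[i] @ wi)
    coef = -y[i] / (1.0 + np.exp(margins))   # -y_k * sigmoid(-margin_k)
    return (coef[:, None] * X[i]).mean(axis=0)

for t in range(1, T + 1):
    eta_t = 1.0 / np.sqrt(t)   # eta_t = O(1/sqrt(t)), as in the quoted setup

    # Edge sampling: each edge is activated independently with probability p_edge,
    # so only a random subset of links exchanges information this round.
    active = [e for e in edges if rng.random() < p_edge]

    # Doubly stochastic mixing matrix supported on the sampled edges only.
    P = np.eye(n_nodes)
    for i, j in active:
        P[i, j] = P[j, i] = 1.0 / n_nodes
        P[i, i] -= 1.0 / n_nodes
        P[j, j] -= 1.0 / n_nodes

    grads = np.stack([local_grad(i, w[i]) for i in range(n_nodes)])

    # Dual averaging: mix dual variables over the sampled edges, then add gradients.
    z = P @ z + grads

    # Primal step: the unconstrained minimiser of <z_i, w> + (1/eta_t) * (lam/2)||w||^2
    # is -(eta_t / lam) * z_i; project it back onto the assumed norm ball.
    w = -(eta_t / lam) * z
    norms = np.linalg.norm(w, axis=1, keepdims=True)
    w = w * np.minimum(1.0, radius / np.maximum(norms, 1e-12))

print("node disagreement:", np.linalg.norm(w - w.mean(axis=0)))
```

The norm-ball projection is only there to keep the unconstrained primal step -(η_t/λ)z_i numerically bounded in this toy setting; the paper's own constraint set and problem formulation are not restated in the rows above.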
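The a9a dataset cited in the Open Datasets row is distributed in LIBSVM format. As a small illustration (the paper does not state its tooling, so scikit-learn and the local filename are assumptions), it can be read in Python with load_svmlight_file:

```python
# Illustrative only: the paper does not say how it reads a9a.
# Assumes the "a9a" file has been downloaded from
# https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/
from sklearn.datasets import load_svmlight_file

X, y = load_svmlight_file("a9a")   # X: sparse feature matrix, y: labels in {-1, +1}
print(X.shape, y.shape)
```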