Decentralized Accelerated Proximal Gradient Descent
Authors: Haishan Ye, Ziang Zhou, Luo Luo, Tong Zhang
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our empirical study shows that the proposed algorithm outperforms existing state-of-the-art algorithms. |
| Researcher Affiliation | Academia | ¹Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen; ²Department of Mathematics, The Hong Kong University of Science and Technology |
| Pseudocode | Yes | Algorithm 1 (DPAG); Algorithm 2 (Fast Mix). A hedged sketch of the accelerated-gossip step is given after this table. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | No | The paper mentions using 'datasets w8a and w9a which can be downloaded in libsvm datasets' but provides no specific link, DOI, or formal citation (authors/year) for public access. |
| Dataset Splits | No | The paper does not provide specific dataset split information, only mentioning the total number of agents and data points. |
| Hardware Specification | No | The paper does not specify any hardware details like GPU/CPU models or types of machines used for experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | Yes | Experiments Setting: In our experiments, we consider random networks where each pair of agents has a connection with probability p = 0.1. We set W = I − L/λ1(L), where L is the Laplacian matrix associated with a weighted graph and λ1(L) is the largest eigenvalue of L. We set m = 100, that is, there exist 100 agents in this network. In our experiments, the gossip matrix W satisfies 1 − λ2(W) = 0.05. ... We set σ1 = 10^-4 for all datasets and set σ2 to 10^-3, 10^-4 and 10^-5 to control the condition number of the objective function. ... In the experiments, we set K = 1, 2, 3 respectively. A sketch of this gossip-matrix construction follows the table. |
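
To make the Fast Mix row concrete: below is a minimal numpy sketch of the Chebyshev-accelerated gossip update that FastMix-style routines are commonly built on, x^{k+1} = (1 + η) W x^k − η x^{k−1}. The function name, the specific formula for η, and the (m, d) array layout are our assumptions for illustration, not code from the paper.

```python
import numpy as np

def fast_mix(X, W, K):
    """Chebyshev-accelerated gossip sketch (assumed form, not the paper's code).

    X : (m, d) array; row a holds agent a's local variable.
    W : (m, m) symmetric, doubly stochastic gossip matrix.
    K : number of communication rounds.
    """
    lam2 = np.linalg.eigvalsh(W)[-2]  # second-largest eigenvalue of W (ascending order)
    eta = (1.0 - np.sqrt(1.0 - lam2**2)) / (1.0 + np.sqrt(1.0 - lam2**2))
    X_prev = X.copy()
    for _ in range(K):
        # accelerated gossip step: mix with neighbors, then extrapolate
        X, X_prev = (1.0 + eta) * (W @ X) - eta * X_prev, X
    return X
```

With K = 1, 2, 3 as in the experiments, each call costs K rounds of neighbor communication and drives the rows of X toward their common average.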
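
And a sketch of the gossip-matrix construction from the Experiment Setup row, using an unweighted Erdős–Rényi graph for simplicity; the paper uses a weighted graph tuned so that 1 − λ2(W) = 0.05, so the seed and the unweighted edges here are our simplifications.

```python
import numpy as np

rng = np.random.default_rng(0)  # placeholder seed
m, p = 100, 0.1                 # 100 agents; each pair connected with probability 0.1

# Erdos-Renyi adjacency matrix (unweighted here; the paper weights the edges)
upper = np.triu(rng.random((m, m)) < p, k=1).astype(float)
A = upper + upper.T

L = np.diag(A.sum(axis=1)) - A                 # graph Laplacian
W = np.eye(m) - L / np.linalg.eigvalsh(L)[-1]  # W = I - L / lambda_1(L)

# Spectral gap; the paper tunes the edge weights so this equals 0.05
print("1 - lambda_2(W) =", 1.0 - np.linalg.eigvalsh(W)[-2])
```

Because L·1 = 0 and L is symmetric, the resulting W is symmetric and doubly stochastic with eigenvalues in [0, 1], as a gossip matrix requires.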