A Linearly Convergent Proximal Gradient Algorithm for Decentralized Optimization
Authors: Sulaiman Alghunaim, Kun Yuan, Ali H. Sayed
NeurIPS 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We remark that simulations of the proposed algorithm are provided in Section B of the supplementary material. |
| Researcher Affiliation | Academia | Sulaiman A. Alghunaim and Kun Yuan, Electrical and Computer Engineering Department, University of California Los Angeles, Los Angeles, CA 90095 ({salghunaim,kunyuan}@ucla.edu); Ali H. Sayed, École Polytechnique Fédérale de Lausanne, CH-1015 Lausanne, Switzerland (ali.sayed@epfl.ch) |
| Pseudocode | Yes | Algorithm (Proximal Primal-Dual Diffusion, P2D2). Setting: let B = 0.5(I − A) = [b_sk] and choose step-sizes µ and α. Set all initial variables to zero and repeat for i = 1, 2, ... (an illustrative code sketch follows this table). |
| Open Source Code | No | The paper does not provide any concrete access to source code for the described methodology, either as a link or as an explicit availability statement. |
| Open Datasets | No | The paper mentions simulations but does not provide any information about specific datasets used or their public availability. |
| Dataset Splits | No | The paper does not provide information on dataset splits (train, validation, test). |
| Hardware Specification | No | The paper does not specify any hardware details (GPU/CPU models, memory, etc.) used for running experiments. |
| Software Dependencies | No | The paper does not list any specific software dependencies with version numbers. |
| Experiment Setup | No | The paper defines tunable step-sizes µ and α within the algorithm, but it does not report specific hyperparameter values, training configurations, or system-level settings in an experimental setup description. |
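
Since the pseudocode row above only quotes the algorithm's setting line, here is a minimal runnable sketch of a decentralized proximal primal-dual iteration in that setting: B = 0.5(I − A), primal and dual step-sizes µ and α, and zero initialization. This is an Arrow-Hurwicz-style stand-in, not the paper's exact P2D2 recursions (those are in the paper's Algorithm box); the function names, the update order, and the LASSO usage example are illustrative assumptions.

```python
import numpy as np

def soft_threshold(v, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def prox_primal_dual_sketch(grad_fns, prox, A, mu, alpha, dim, num_iters):
    # Illustrative decentralized proximal primal-dual iteration (NOT the
    # paper's exact P2D2 recursions). Each agent k minimizes J_k(x) + R(x)
    # subject to network consensus, encoded through B = 0.5(I - A).
    K = A.shape[0]
    B = 0.5 * (np.eye(K) - A)   # as in the paper's setting line
    X = np.zeros((K, dim))      # primal iterates, one row per agent (zero init)
    Y = np.zeros((K, dim))      # dual iterates (zero init)
    for _ in range(num_iters):
        # Dual ascent along B; B @ X mixes each agent only with its neighbors.
        Y = Y + alpha * (B @ X)
        # Local gradient step plus dual correction, then proximal step on R.
        G = np.stack([grad_fns[k](X[k]) for k in range(K)])
        X = prox(X - mu * (G + B @ Y), mu)
    return X

# Hypothetical usage: decentralized LASSO where agent k holds (H_k, b_k),
# J_k(x) = 0.5 * ||H_k x - b_k||^2, and R(x) = ||x||_1.
rng = np.random.default_rng(0)
K, dim = 4, 3
A = np.full((K, K), 1.0 / K)  # averaging matrix of a fully connected network
H = [rng.standard_normal((5, dim)) for _ in range(K)]
b = [rng.standard_normal(5) for _ in range(K)]
grads = [lambda x, Hk=H[k], bk=b[k]: Hk.T @ (Hk @ x - bk) for k in range(K)]
X = prox_primal_dual_sketch(grads, soft_threshold, A, mu=0.01, alpha=0.5,
                            dim=dim, num_iters=2000)
print(np.max(np.std(X, axis=0)))  # agents' iterates should be near consensus
```

With a shared elementwise prox like soft-thresholding, the stacked call prox(·, µ) is valid as written; a non-separable R would need a per-agent loop. The exact P2D2 updates, including where the proximal step sits relative to the combination step, should be taken from the paper itself.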