Manifold Identification for Ultimately Communication-Efficient Distributed Optimization
Authors: Yu-Sheng Li, Wei-Lin Chiang, Ching-Pei Lee
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments show that our method can be orders of magnitudes lower in the communication cost and an order of magnitude faster in the running time than the state of the art. |
| Researcher Affiliation | Academia | 1Department of Computer Science, National Taiwan University, Taipei, Taiwan 2Department of Mathematics & Institute for Mathematical Sciences, National University of Singapore, Singapore. |
| Pseudocode | Yes | Algorithm 1: MADPQN: An ultimately communication-efficient two-stage manifold identification method for (1) |
| Open Source Code | Yes | Based on this study, we have released a package for use at http://www.github.com/leepei/madpqn/. |
| Open Datasets | Yes | We consider public datasets listed in Table 1 (from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets) |
| Dataset Splits | No | The paper does not explicitly provide details about training/validation/test dataset splits. It only mentions using public datasets and evaluates convergence against an optimal function value. |
| Hardware Specification | Yes | For news20, we use 32 t2.micro instances, while for the remaining two larger datasets, we use 32 r5n.large instances. |
| Software Dependencies | No | The paper mentions that comparison methods are "implemented in C/C++" and uses "Open MPI", but it does not provide specific version numbers for any key software components or libraries required for reproduction. |
| Experiment Setup | Yes | We use the following fixed parameters for MADPQN throughout the experiments, and it works quite robustly, so there is no need to tune these parameters: θ = 2, m = 10, T = 2, S = 10, σ1 = 10^-4, ϵ = 10^-10, δ = 10^-10, ε = 10^-14, ϵj = max{ϵ, 10^(-4-3j)}. |
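The fixed hyperparameters quoted in the Experiment Setup row can be collected into a small configuration sketch. This is illustrative only: the key names are our own (not the API of the authors' released package at github.com/leepei/madpqn), the exponents are read as negative powers of ten, and the stage-wise tolerance schedule ϵ_j = max{ϵ, 10^(-4-3j)} is our reading of the quoted formula, so treat all of it as an assumption.

```python
# Hypothetical collection of MADPQN's fixed hyperparameters as quoted
# in the paper's experiment setup. Key names are illustrative, not the
# released package's actual configuration interface.
MADPQN_PARAMS = {
    "theta": 2,        # θ
    "m": 10,           # history size
    "T": 2,
    "S": 10,
    "sigma1": 1e-4,    # σ1
    "epsilon": 1e-10,  # ϵ (floor for the stage-wise tolerance)
    "delta": 1e-10,    # δ
}

def epsilon_j(j, eps=MADPQN_PARAMS["epsilon"]):
    """Stage-wise tolerance ϵ_j = max(ϵ, 10^(-4-3j)).

    This schedule tightens by three orders of magnitude per stage j
    until it hits the floor ϵ; the formula is our reading of the
    quoted setup and should be treated as an assumption.
    """
    return max(eps, 10.0 ** (-4 - 3 * j))
```

Under this reading, the tolerance starts at 10^-4 for j = 0 and reaches the floor ϵ = 10^-10 after two stages.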