Adaptive Consensus ADMM for Distributed Optimization
Authors: Zheng Xu, Gavin Taylor, Hao Li, Mário A. T. Figueiredo, Xiaoming Yuan, Tom Goldstein
ICML 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We now study the performance of ACADMM on benchmark problems, and compare to other methods. (Section 6) and Table 1 reports the convergence speed in iterations and wall-clock time (seconds) for various test cases. |
| Researcher Affiliation | Academia | ¹University of Maryland, College Park; ²United States Naval Academy, Annapolis; ³Instituto de Telecomunicações, IST, ULisboa, Portugal; ⁴Hong Kong Baptist University, Hong Kong. |
| Pseudocode | Yes | Algorithm 1 Adaptive consensus ADMM (ACADMM) (Section 5.4) |
| Open Source Code | No | No explicit statement or link indicating that the source code for the proposed method (ACADMM) is publicly available was found. The paper mentions existing open-source libraries or tools as references (e.g., LIBSVM) but not its own implementation. |
| Open Datasets | Yes | We also acquire large empirical datasets from the LIBSVM webpage (Liu et al., 2009), as well as MNIST digital images (Le Cun et al., 1998), and CIFAR10 object images (Krizhevsky & Hinton, 2009). (Section 6.2) and We use a graph from the Seventh DIMACS Implementation Challenge on Semidefinite and Related Optimization Problems following (Burer & Monteiro, 2003) for Semidefinite Programming (SDP). (Section 6.2) |
| Dataset Splits | No | The paper uses standard benchmark datasets like MNIST and CIFAR10, which typically have predefined splits. However, the paper does not explicitly state the specific percentages, sample counts, or reference the standard splits for training, validation, or test sets in its text. |
| Hardware Specification | Yes | These experiments are performed with 128 cores on a Cray XC-30 supercomputer. (Section 6.3) |
| Software Dependencies | No | No specific software dependencies with version numbers were explicitly mentioned. The paper references algorithms like L-BFGS (Liu & Nocedal, 1989) and dual coordinate ascent (Chang & Lin, 2011) but not their implementation details or specific software environments (e.g., Python, PyTorch, TensorFlow) or their versions. |
| Experiment Setup | Yes | The regularization parameter is fixed at ρ = 10 in all experiments. (Section 6.2) and The initial penalty is fixed at τ0 = 1 for all methods unless otherwise specified. (Section 6.2) and We suggest updating the stepsize every Tf = 2 iterations, fixing the safeguarding threshold ϵcor = 0.2, and choosing a large convergence constant Ccg = 10^10. (Section 5.4) |
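To make the reported setup concrete, the following is a minimal sketch of generic consensus ADMM on a toy distributed least-squares problem. The adaptive penalty here uses simple residual balancing as an illustrative stand-in, not the paper's spectral ACADMM stepsize rule; the `Tf` update interval mirrors the paper's every-`Tf`-iterations schedule, but the function name, the toy objective, and the balancing constants are assumptions for illustration.

```python
import numpy as np

def consensus_admm(A_blocks, b_blocks, tau=1.0, iters=200, Tf=2, mu=10.0):
    """Consensus ADMM for min_x sum_i ||A_i x - b_i||^2.

    Uses a residual-balancing penalty update every Tf iterations
    (an illustrative adaptive rule, not the spectral ACADMM stepsize).
    """
    N = len(A_blocks)
    d = A_blocks[0].shape[1]
    x = [np.zeros(d) for _ in range(N)]
    u = [np.zeros(d) for _ in range(N)]  # scaled dual variables
    z = np.zeros(d)                      # consensus variable
    for k in range(iters):
        # Local solves: (2 A_i^T A_i + tau I) x_i = 2 A_i^T b_i + tau (z - u_i)
        for i in range(N):
            A, b = A_blocks[i], b_blocks[i]
            x[i] = np.linalg.solve(2 * A.T @ A + tau * np.eye(d),
                                   2 * A.T @ b + tau * (z - u[i]))
        z_old = z
        z = np.mean([x[i] + u[i] for i in range(N)], axis=0)
        for i in range(N):
            u[i] += x[i] - z
        # Primal and dual residual norms
        r = np.sqrt(sum(np.linalg.norm(x[i] - z) ** 2 for i in range(N)))
        s = tau * np.sqrt(N) * np.linalg.norm(z - z_old)
        # Residual balancing every Tf iterations; rescale scaled duals with tau
        if (k + 1) % Tf == 0:
            if r > mu * s:
                tau *= 2.0
                u = [ui / 2.0 for ui in u]
            elif s > mu * r:
                tau /= 2.0
                u = [ui * 2.0 for ui in u]
    return z
```

On a well-conditioned problem the returned consensus vector matches the centralized least-squares solution on the stacked data; the point of the sketch is only to show where the initial penalty (`tau`, cf. τ₀ = 1) and the update interval (`Tf` = 2) from the reported setup enter the iteration.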