Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Asynchronous Distributed ADMM for Consensus Optimization

Authors: Ruiliang Zhang, James Kwok

ICML 2014 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
--- | --- | ---
Research Type | Experimental | "Experiments on different distributed ADMM applications show that asynchrony reduces the time on network waiting, and achieves faster convergence than its synchronous counterpart in terms of the wall clock time. In this section, we perform experiments on three different ADMM applications"
Researcher Affiliation | Academia | "Ruiliang Zhang EMAIL James T. Kwok EMAIL Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong"
Pseudocode | Yes | "Algorithm 1 Synchronous ADMM (sync-ADMM): Processing by the master. ... Algorithm 4 Asynchronous ADMM (async-ADMM): Processing by worker i."
Open Source Code | No | The paper discusses implementation details (C++, Armadillo, MPICH) but provides neither a link to its own source code nor an explicit statement about its availability.
Open Datasets | Yes | "We use the digits 4 and 9 from the MNIST-8M data set..." (footnote: http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets)
Dataset Splits | No | The paper mentions partitioning the data across workers but gives no training/validation/test split details (e.g., percentages, counts, or a cross-validation scheme) needed for reproduction.
Hardware Specification | Yes | "We use a cluster of 18 computing nodes interconnected with a gigabit Ethernet. Each node has 4 AMD Opteron 2216 (2.4GHz) processors and 16GB memory."
Software Dependencies | Yes | "The algorithms are implemented in C++, with the Armadillo v3.920.3 library linked to LAPACK/BLAS for efficient computation. Moreover, the Message Passing Interface (MPI) implementation MPICH v3.0.4 is used for interprocessor communication."
Experiment Setup | Yes | "The proposed algorithm has a simple structure and good convergence guarantees (its convergence rate can be reduced to that of its synchronous counterpart). In this experiment, we have N = 16 workers..." (Section 5.1); "...with S = 2 and τ = 32" (Section 5.2); "we set m = 10000, n = 64000 and r = 100. ... we set λ1 = λ2 = 1." (Section 5.3)