A General Analysis of the Convergence of ADMM
Authors: Robert Nishihara, Laurent Lessard, Ben Recht, Andrew Packard, Michael Jordan
ICML 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On a numerical example, we demonstrate that minimizing the derived bound on the convergence rate provides a practical approach to selecting algorithm parameters for particular ADMM instances. We complement our upper bound by constructing a nearly-matching lower bound on the worst-case rate of convergence. |
| Researcher Affiliation | Academia | Robert Nishihara (RKN@EECS.BERKELEY.EDU); Laurent Lessard (LESSARD@BERKELEY.EDU); Benjamin Recht (BRECHT@EECS.BERKELEY.EDU); Andrew Packard (APACKARD@BERKELEY.EDU); Michael I. Jordan (JORDAN@EECS.BERKELEY.EDU) — University of California, Berkeley, CA 94720 USA |
| Pseudocode | Yes | Algorithm 1 Alternating Direction Method of Multipliers; Algorithm 2 Over-Relaxed Alternating Direction Method of Multipliers |
| Open Source Code | No | No explicit statement or link providing access to the source code for the methodology described in the paper was found. |
| Open Datasets | No | The paper uses a synthetically generated dataset for a distributed Lasso problem: 'Each A_i is generated by populating a 600 × 500 matrix with independent standard normal entries and normalizing the columns. We generate each b_i via b_i = A_i x_0 + ε_i, where x_0 is a sparse 500-dimensional vector with 250 independent standard normal entries, and ε_i ∼ N(0, 10^-3 I).' This is not a publicly available dataset with concrete access information. |
| Dataset Splits | No | The paper describes numerical examples on a synthetically generated dataset to demonstrate convergence, but it does not specify any training, validation, or test dataset splits. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., programming languages, libraries, or solvers with their exact versions) used for the implementation or experiments. |
| Experiment Setup | Yes | For the distributed Lasso problem, the paper specifies experimental parameters and data generation details: 'we choose N = 5 and µ = 0.1. Each A_i is generated by populating a 600 × 500 matrix with independent standard normal entries and normalizing the columns. We generate each b_i via b_i = A_i x_0 + ε_i, where x_0 is a sparse 500-dimensional vector with 250 independent standard normal entries, and ε_i ∼ N(0, 10^-3 I).' |
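The data-generation recipe quoted above, together with the over-relaxed ADMM of the paper's Algorithm 2, can be sketched as follows. This is a minimal illustration, not the authors' code: it stacks the N = 5 blocks into a single Lasso problem rather than solving the distributed consensus formulation, and the penalty parameter `rho`, relaxation parameter `alpha`, and iteration count are illustrative defaults, not the tuned values the paper's convergence-rate bound would select.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data per the paper's Lasso example: N = 5 blocks, each A_i is
# 600 x 500 with i.i.d. standard normal entries and unit-norm columns;
# b_i = A_i x0 + eps_i, with x0 sparse (250 standard normal entries) and
# eps_i ~ N(0, 1e-3 I).
N, m, n, mu = 5, 600, 500, 0.1
x0 = np.zeros(n)
support = rng.choice(n, 250, replace=False)
x0[support] = rng.standard_normal(250)

A_blocks, b_blocks = [], []
for _ in range(N):
    A_i = rng.standard_normal((m, n))
    A_i /= np.linalg.norm(A_i, axis=0)  # normalize columns
    b_i = A_i @ x0 + np.sqrt(1e-3) * rng.standard_normal(m)
    A_blocks.append(A_i)
    b_blocks.append(b_i)

# Simplification: stack the blocks into one Lasso problem.
A = np.vstack(A_blocks)
b = np.concatenate(b_blocks)

def soft_threshold(v, kappa):
    return np.sign(v) * np.maximum(np.abs(v) - kappa, 0.0)

def over_relaxed_admm_lasso(A, b, mu, rho=1.0, alpha=1.5, iters=200):
    """Over-relaxed ADMM for min_x 0.5*||Ax - b||^2 + mu*||x||_1.

    rho and alpha are illustrative; the paper selects such parameters
    by minimizing its derived convergence-rate bound."""
    n = A.shape[1]
    Atb = A.T @ b
    # Cache the Cholesky factor of (A^T A + rho I) for the x-update.
    L = np.linalg.cholesky(A.T @ A + rho * np.eye(n))
    x = np.zeros(n)
    z = np.zeros(n)
    u = np.zeros(n)
    for _ in range(iters):
        x = np.linalg.solve(L.T, np.linalg.solve(L, Atb + rho * (z - u)))
        x_hat = alpha * x + (1 - alpha) * z  # over-relaxation step
        z = soft_threshold(x_hat + u, mu / rho)
        u = u + x_hat - z
    return z

x_admm = over_relaxed_admm_lasso(A, b, mu)
```

Setting `alpha = 1` recovers standard ADMM (the paper's Algorithm 1); values of `alpha` above 1 over-relax the update and can accelerate convergence.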