ADMM without a Fixed Penalty Parameter: Faster Convergence with New Adaptive Penalization
Authors: Yi Xu, Mingrui Liu, Qihang Lin, Tianbao Yang
NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we present some experimental results of the proposed algorithms for solving three tasks, namely generalized LASSO, robust regression with a low-rank regularizer (RR-LR) and learning low-rank representation. |
| Researcher Affiliation | Academia | Department of Computer Science, The University of Iowa, Iowa City, IA 52242, USA; Department of Management Sciences, The University of Iowa, Iowa City, IA 52242, USA. {yi-xu, mingrui-liu, qihang-lin, tianbao-yang}@uiowa.edu |
| Pseudocode | Yes | Algorithm 1 ADMM(x₀, β, t); Algorithm 2 LA-ADMM(x₀, β₁, K, t); Algorithm 3 SADMM(x₀, η, β, t, Ω); Algorithm 4 LA-SADMM(x₀, η₁, β₁, D₁, K, t); Algorithm 5 LA-ADMM with Restarting; Algorithm 6 LA-SADMM with Restarting (a hedged Python sketch of LA-ADMM follows the table) |
| Open Source Code | No | The paper does not contain any explicit statement about releasing the source code for the described methodology or provide a link to a code repository. |
| Open Datasets | Yes | We choose two medium-scale data sets from libsvm website, namely w8a data (n = 49749, d = 300) and gisette data (n = 6000, d = 5000), to conduct the experiment. |
| Dataset Splits | No | The paper mentions using 'training data' and refers to specific datasets, but it does not provide explicit details about the dataset splits (e.g., percentages or sample counts for training, validation, and testing) that would be needed for reproduction. |
| Hardware Specification | No | The paper describes running experiments and generating synthetic data but does not provide any specific details regarding the hardware used (e.g., CPU, GPU models, or cloud infrastructure). |
| Software Dependencies | No | The paper mentions using existing methods and tools like 'graphical lasso' and datasets from 'libsvm website,' but it does not specify any software versions or dependencies (e.g., Python, PyTorch, or specific library versions) required to reproduce the experiments. |
| Experiment Setup | Yes | For SADMM, we tune both η₁ and β from {10^{-5:1:5}} (i.e., powers of ten from 10^-5 to 10^5). For LA-SADMM, we set the initial step size and penalty parameter to their theoretical values in Theorem 4, and select D₁ from {100, 1000}. The value of t in LA-SADMM is set to 10^5 and 5×10^4 for w8a and gisette, respectively. ... We set λ = 100. ... we choose β = 0.001 as the initial penalty parameter for LA-ADMM and ADMM-AP. ... start with the number of inner iterations t = 2, increase its value by a factor of 2 after 10 stages, and also increase the value of β by a factor of 10 after each stage (see the schedule sketch after the table). |
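
For orientation, the following is a minimal sketch of LA-ADMM (Algorithm 2 above) applied to one of the paper's tasks, the generalized LASSO min_x 0.5‖Ax − b‖² + λ‖Fx‖₁, under the standard splitting Fx = z. The direct solve in the x-update, the warm-starting details, and the stage growth factor are illustrative assumptions, not the paper's exact implementation.

```python
import numpy as np

def soft_threshold(v, tau):
    # Elementwise soft-thresholding: the prox of tau * ||.||_1.
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def admm_stage(A, b, F, lam, beta, t, x, z, y):
    # t iterations of plain ADMM (Algorithm 1) with fixed penalty beta for
    #   min_x 0.5 * ||A x - b||^2 + lam * ||F x||_1,  split as F x = z.
    AtA, Atb = A.T @ A, A.T @ b
    M = AtA + beta * (F.T @ F)  # constant within a stage since beta is fixed
    for _ in range(t):
        # x-update: solve the quadratic subproblem (A^T A + beta F^T F) x = rhs.
        x = np.linalg.solve(M, Atb + F.T @ (beta * z - y))
        # z-update: prox step (soft-thresholding) at F x + y / beta.
        z = soft_threshold(F @ x + y / beta, lam / beta)
        # Dual ascent on the constraint F x - z = 0.
        y = y + beta * (F @ x - z)
    return x, z, y

def la_admm(A, b, F, lam, beta1, K, t, factor=2.0):
    # LA-ADMM (Algorithm 2): K warm-started stages of plain ADMM, multiplying
    # the penalty by `factor` after each stage (factor is an assumption here).
    x = np.zeros(A.shape[1])
    z = np.zeros(F.shape[0])
    y = np.zeros(F.shape[0])
    beta = beta1
    for _ in range(K):
        x, z, y = admm_stage(A, b, F, lam, beta, t, x, z, y)
        beta *= factor
    return x
```

The key design point captured here is that the penalty parameter is not tuned once and fixed; each stage reuses the previous stage's primal and dual iterates and restarts ADMM with a larger β, which is what makes the penalization "locally adaptive."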
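The staged restarting schedule quoted in the Experiment Setup row can be summarized in a few lines. The reading below (β multiplied by 10 after every stage, the inner budget t doubling every 10 stages) is one plausible interpretation of the quoted text, and the default values are illustrative.

```python
def restart_schedule(num_stages, beta0=1e-3, t0=2):
    # One reading of the quoted setup: beta grows 10x after every stage,
    # while the inner-iteration budget t starts at 2 and doubles every
    # 10 stages. Yields (t, beta) for each stage in order.
    t, beta = t0, beta0
    for k in range(num_stages):
        yield t, beta
        beta *= 10.0
        if (k + 1) % 10 == 0:
            t *= 2
```

Under this reading, `restart_schedule(12)` would run stages 1 through 10 with t = 2 and stages 11 and 12 with t = 4, with β growing from 10^-3 to roughly 10^8 over the 12 stages.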