Adaptive Stochastic Alternating Direction Method of Multipliers

Authors: Peilin Zhao, Jinwei Yang, Tong Zhang, Ping Li

ICML 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Encouraging empirical results on a variety of real-world datasets confirm the effectiveness and efficiency of the proposed algorithms.
Researcher Affiliation | Collaboration | Peilin Zhao (ZHAOP@I2R.A-STAR.EDU.SG), Data Analytics Department, Institute for Infocomm Research, A*STAR, Singapore; Jinwei Yang (JYANG7@ND.EDU), Department of Mathematics, Rutgers University, and Department of Mathematics, University of Notre Dame, USA; Tong Zhang (TZHANG@STAT.RUTGERS.EDU), Department of Statistics & Biostatistics, Rutgers University, USA, and Big Data Lab, Baidu Research, China; Ping Li (PINGLI@STAT.RUTGERS.EDU), Department of Statistics & Biostatistics and Department of Computer Science, Rutgers University, and Baidu Research, USA
Pseudocode | Yes | Algorithm 1: Adaptive Stochastic Alternating Direction Method of Multipliers (Ada-SADMM). Algorithm 2: Adaptive Stochastic ADMM with Diagonal Matrix Update (Ada-SADMMdiag). Algorithm 3: Adaptive Stochastic ADMM with Full Matrix Update (Ada-SADMMfull). A hedged code sketch of the diagonal update appears after the table.
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | To examine the performance, we test all the algorithms on six real-world datasets from web machine learning repositories, which are listed in Table 1. The news20 dataset was downloaded from www.cs.nyu.edu/~roweis/data.html. All other datasets were downloaded from the LIBSVM website.
Dataset Splits | Yes | For each dataset, we randomly divide it into two folds: training set with 80% of examples and test set with the rest.
Hardware Specification | No | All experiments were run in Matlab on a machine with a 3.4GHz CPU. This description is not specific enough to identify the hardware model (e.g., Intel Core i7, Xeon), family, or number of cores.
Software Dependencies | No | All experiments were run in Matlab on a machine with a 3.4GHz CPU. This mentions only 'Matlab', without a version number or any specific toolboxes or libraries.
Experiment Setup | Yes | In particular, we set the penalty parameters γ = ν = 1/n, where n is the number of training examples, and the trade-off parameter β = 1. In addition, we set the step size parameter η_t = 1/(γt) for SADMM according to Theorem 2 in (Ouyang et al., 2013). Finally, the smooth parameter a is set to 1, and the step sizes for the adaptive stochastic ADMM algorithms are searched from 2^[-5:5] using cross validation. A sketch collecting these settings appears after the table.
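
Since the paper gives the method only as pseudocode (Algorithms 1-3) and no code was released, here is a minimal Python sketch of one plausible reading of the diagonal-matrix adaptive stochastic ADMM update, in the spirit of Algorithm 2 (Ada-SADMMdiag). It is instantiated for a fused-lasso-style objective min_w E[loss(w; x, y)] + lam * ||F w||_1 with the ADMM splitting v = F w; all names (ada_sadmm_diag, logistic_grad, lam, eta) and the exact update order are our assumptions, not the authors' verbatim algorithm.

```python
# Hedged sketch of a diagonal-matrix adaptive stochastic ADMM step
# (in the spirit of Ada-SADMMdiag). Assumptions: binary labels in {-1, +1},
# logistic loss, regularizer lam * ||F w||_1, scaled dual variable u.
import numpy as np

def logistic_grad(w, x, y):
    """Stochastic gradient of the logistic loss at a single example (x, y)."""
    return -y * x / (1.0 + np.exp(y * np.dot(w, x)))

def ada_sadmm_diag(X, Y, F, lam=1.0, beta=1.0, eta=0.1, a=1.0, T=1000, seed=0):
    rng = np.random.default_rng(seed)
    n, d = X.shape
    k = F.shape[0]
    w, v, u = np.zeros(d), np.zeros(k), np.zeros(k)  # primal, split, scaled dual
    G = np.zeros(d)                                  # running sum of squared grads
    FtF = F.T @ F
    for t in range(T):
        i = rng.integers(n)                          # sample one training example
        g = logistic_grad(w, X[i], Y[i])
        G += g * g
        H = a + np.sqrt(G)                           # diagonal adaptive matrix
        # w-step: minimize g'w + (beta/2)||Fw - v + u||^2 + (1/(2*eta))||w - w_t||_H^2
        A = beta * FtF + np.diag(H) / eta
        rhs = (H / eta) * w - g + beta * (F.T @ (v - u))
        w = np.linalg.solve(A, rhs)
        # v-step: soft-thresholding, the proximal operator of lam * ||.||_1
        z = F @ w + u
        v = np.sign(z) * np.maximum(np.abs(z) - lam / beta, 0.0)
        # dual ascent on the scaled multiplier
        u += F @ w - v
    return w
```

The full-matrix variant (Algorithm 3) would replace the diagonal H with a matrix built from outer products of the gradients; the diagonal form above is the cheaper update the paper recommends for high-dimensional data.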
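
For concreteness, the following short sketch collects the reported experimental protocol: the 80/20 random split and the quoted hyperparameter values. Only the numeric settings come from the paper; the helper name make_split, the seed, and the example value of n_train are hypothetical.

```python
# Hedged sketch of the reported setup: 80/20 random split plus the quoted
# hyperparameters. Values are from the paper; the code itself is ours.
import numpy as np

def make_split(X, Y, frac=0.8, seed=0):
    """Randomly divide a dataset into a training fold (80%) and a test fold."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(Y))
    cut = int(frac * len(Y))
    return (X[idx[:cut]], Y[idx[:cut]]), (X[idx[cut:]], Y[idx[cut:]])

n_train = 8000                             # hypothetical training-fold size n
gamma = nu = 1.0 / n_train                 # penalty parameters gamma = nu = 1/n
beta = 1.0                                 # trade-off parameter
a = 1.0                                    # smooth parameter for the adaptive matrix
eta_sadmm = lambda t: 1.0 / (gamma * t)    # SADMM step size per Theorem 2, Ouyang et al. (2013)
eta_grid = 2.0 ** np.arange(-5, 6)         # candidate step sizes 2^[-5:5] for cross validation
```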