Faster and Non-ergodic O(1/K) Stochastic Alternating Direction Method of Multipliers

Authors: Cong Fang, Feng Cheng, Zhouchen Lin

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The experimental results demonstrate that our algorithm is faster than the existing state-of-the-art stochastic ADMM methods." "We conduct experiments to show the effectiveness of our method."
Researcher Affiliation | Academia | Cong Fang, Feng Cheng, Zhouchen Lin; Key Laboratory of Machine Perception (MOE), School of EECS, Peking University, P. R. China; Cooperative Medianet Innovation Center, Shanghai Jiao Tong University, P. R. China; fangcong@pku.edu.cn, fengcheng@pku.edu.cn, zlin@pku.edu.cn
Pseudocode | Yes | The paper provides "Algorithm 1: Inner loop of ACC-SADMM" and "Algorithm 2: ACC-SADMM". (An illustrative stochastic ADMM sketch is given after this table.)
Open Source Code | No | "The code will be available at http://www.cis.pku.edu.cn/faculty/vision/zlin/zlin.htm." This states future availability, not a current release.
Open Datasets | Yes | "The experiments are performed on four benchmark data sets: a9a, covertype, mnist and dna." a9a, covertype and dna are from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/, and mnist is from http://yann.lecun.com/exdb/mnist/. (A loading sketch is given after this table.)
Dataset Splits | No | Table 3 provides '# training' and '# testing' columns for the datasets, but no explicit validation split is mentioned.
Hardware Specification | Yes | "Experiments are performed on Intel(R) CPU i7-4770 @ 3.40GHz machine with 16 GB memory."
Software Dependencies | No | The paper does not specify software dependencies or their version numbers.
Experiment Setup | Yes | "And like [3] and [4], we fix µ = 10^{-5} and report the performance based on (x_t, Ax_t) to satisfy the constraints of ADMM. And we set m = 2n/b for all the algorithms." (A short parameter sketch is given after this table.)
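
The paper's Algorithms 1 and 2 are not reproduced on this page. For orientation only, the sketch below shows a plain linearized stochastic ADMM loop for a problem of the form min_x (1/n) Σ_i loss_i(x) + µ‖z‖₁ s.t. Ax − z = 0 (the graph-guided fused lasso structure used in the experiments). It is not ACC-SADMM: it omits the extrapolation/acceleration and variance-reduction steps, and every function name, step size, and default value is an illustrative assumption.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def stochastic_admm(grad_batch, X, y, A, mu=1e-5, beta=1.0, eta=0.01,
                    batch=100, iters=1000, seed=0):
    """Generic linearized stochastic ADMM (illustrative, not ACC-SADMM)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    x = np.zeros(d)
    z = np.zeros(A.shape[0])
    lam = np.zeros(A.shape[0])              # dual variable
    for _ in range(iters):
        idx = rng.choice(n, size=batch, replace=False)
        g = grad_batch(x, X[idx], y[idx])   # mini-batch gradient of the loss
        # linearized (gradient) step on x over the augmented Lagrangian
        x = x - eta * (g + A.T @ (lam + beta * (A @ x - z)))
        # exact proximal step on z: soft-thresholding for the l1 term
        z = soft_threshold(A @ x + lam / beta, mu / beta)
        # dual ascent step on the constraint Ax - z = 0
        lam = lam + beta * (A @ x - z)
    return x, z
```

A caller only needs to supply grad_batch (e.g. the mini-batch gradient of a logistic loss) and a sparsity-pattern matrix A.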
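
For the "Open Datasets" row, the following is a minimal loading sketch. It assumes the a9a files (a9a for training, a9a.t for testing) have already been downloaded from the LIBSVM page in their standard sparse text format, and it uses scikit-learn's load_svmlight_file purely as a convenient reader; none of this tooling is prescribed by the paper.

```python
from sklearn.datasets import load_svmlight_file

# Assumed local copies of the LIBSVM-format a9a splits.
X_train, y_train = load_svmlight_file("a9a")
X_test, y_test = load_svmlight_file("a9a.t", n_features=X_train.shape[1])
print(X_train.shape, X_test.shape)  # sparse design matrices, +/-1 labels
```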
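
For the "Experiment Setup" row, the reported relations can be written out directly; the training size and mini-batch size below are assumed example values, not figures quoted from the paper.

```python
mu = 1e-5          # regularization weight fixed in the experiments
n = 32561          # assumed training-set size (e.g. a9a)
b = 100            # assumed mini-batch size
m = (2 * n) // b   # inner-loop length m = 2n/b, set "for all the algorithms"
```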