Fast-and-Light Stochastic ADMM
Authors: Shuai Zheng, James T. Kwok
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | 4 Experiments |
| Researcher Affiliation | Academia | Shuai Zheng, James T. Kwok, Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong, {szhengac, jamesk}@cse.ust.hk |
| Pseudocode | Yes | Algorithm 1 SVRG-ADMM for strongly convex problems. |
| Open Source Code | No | No explicit statement regarding the release of source code for the described methodology or a direct link to a code repository is provided. |
| Open Datasets | Yes | Downloaded from http://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/, http://osmot.cs.cornell.edu/kddcup/datasets.html, and http://largescale.ml.tu-berlin.de/instructions/. |
| Dataset Splits | No | The paper provides training and testing set sizes (e.g., 'We use 1,281,167 images for training, and 50,000 images for testing' and Table 2), but does not explicitly detail a separate validation split or the methodology for creating these splits (e.g., random seed, stratified splitting). |
| Hardware Specification | Yes | Experiments are performed on a PC with Intel i7-3770 3.4GHz CPU and 32GB RAM |
| Software Dependencies | No | The paper mentions 'Matlab' as the environment used for comparison, but does not provide specific version numbers for Matlab or any other key software libraries/dependencies. |
| Experiment Setup | Yes | We use a mini-batch size of b = 100 on protein and covertype, and b = 500 on mnist8m and dna. The proposed SVRG-ADMM uses the linearized update in (12) and m = 2n/b. For SVRG-ADMM, since the learning rate in (12) is effectively η/γ, we set γ = 1 and only tune η. All parameters are tuned as in [Zhong and Kwok, 2014]. We set λ1 = 10⁻⁵, λ2 = 10⁻⁴, and use a mini-batch size b = 500. We set n = 100,000, d = 500, λ = 0.1/√n, and a mini-batch size b = 100. |
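To make the reported setup concrete, the following is a minimal sketch of an SVRG-ADMM-style loop for a simple lasso instance (min (1/2n)‖Zx − t‖² + λ‖y‖₁ s.t. x = y), using a variance-reduced gradient estimator, a linearized x-update, and m = 2n/b inner iterations per epoch as stated in the setup. This is an illustrative reconstruction, not the authors' code: the variable names (`eta`, `rho`), the toy problem, and all hyperparameter values below are assumptions for demonstration.

```python
import numpy as np

def svrg_admm(Z, t, lam=0.01, eta=0.05, rho=1.0, b=100, epochs=30, seed=0):
    """Hedged sketch of an SVRG-ADMM-style loop for the lasso
    min (1/2n)||Zx - t||^2 + lam*||y||_1  s.t.  x = y.
    eta/rho and all defaults are illustrative, not the paper's values."""
    rng = np.random.default_rng(seed)
    n, d = Z.shape
    x = np.zeros(d); y = np.zeros(d); u = np.zeros(d)  # u: scaled dual variable
    m = 2 * n // b                                     # inner iterations, m = 2n/b
    for _ in range(epochs):
        x_tilde = x.copy()
        full_grad = Z.T @ (Z @ x_tilde - t) / n        # snapshot full gradient
        for _ in range(m):
            idx = rng.choice(n, size=b, replace=False)
            Zb, tb = Z[idx], t[idx]
            # SVRG estimator: mini-batch gradient corrected by the snapshot
            g = (Zb.T @ (Zb @ x - tb) - Zb.T @ (Zb @ x_tilde - tb)) / b + full_grad
            # linearized x-update: one gradient step on f plus the augmented term
            x = x - eta * (g + rho * (x - y + u))
            # y-update: soft-thresholding, the prox of lam*||.||_1 with penalty rho
            y = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
            u = u + x - y                              # dual ascent on x = y
    return y

# Tiny usage example on synthetic data (hypothetical, for illustration only)
rng = np.random.default_rng(1)
Z = rng.standard_normal((1000, 20))
x_true = np.zeros(20); x_true[:3] = [1.0, -2.0, 0.5]
t = Z @ x_true + 0.01 * rng.standard_normal(1000)
x_hat = svrg_admm(Z, t, lam=0.01, eta=0.05, b=100, epochs=30)
```

The snapshot-corrected gradient is what gives SVRG-type methods their variance reduction: as `x` approaches `x_tilde`, the correction term cancels the mini-batch noise, so a constant step size can be used instead of a decaying one.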