Fast Nonsmooth Regularized Risk Minimization with Continuation

Authors: Shuai Zheng, Ruiliang Zhang, James T. Kwok

AAAI 2016 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on nonsmooth classification and regression tasks demonstrate that the proposed algorithm outperforms the state-of-the-art.
Researcher Affiliation | Academia | Shuai Zheng, Ruiliang Zhang, James T. Kwok, Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong. {szhengac, rzhangaf, jamesk}@cse.ust.hk
Pseudocode | Yes | Algorithm 1: CNS algorithm for strongly convex problems.
Open Source Code | No | The paper does not provide any explicit statement or link regarding the public availability of its source code.
Open Datasets | Yes | We only report results on two data sets (Table 3) from the LIBSVM archive: (i) the popularly used classification data set rcv1; and (ii) Year Prediction MSD, the largest regression data set in the LIBSVM archive and a subset of the Million Song data set.
Dataset Splits | Yes | ν1 and ν2 are tuned by 5-fold cross-validation. For each method, the step size is tuned by running on a subset containing 20% of the training data for a few epochs (for the proposed method, we tune η1).
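The 5-fold cross-validation tuning described in this row can be sketched as a plain grid search. This is a minimal illustration, not the authors' code: the names `k_fold_indices`, `cross_validate`, and the `loss_fn(train, valid, nu1, nu2)` callback are all assumptions introduced here for clarity.

```python
import random

def k_fold_indices(n, k=5, seed=0):
    """Split indices 0..n-1 into k disjoint folds (hypothetical helper)."""
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    return [idx[i::k] for i in range(k)]

def cross_validate(loss_fn, n, grid, k=5):
    """Return the (nu1, nu2) pair with the lowest mean held-out loss.

    loss_fn(train_idx, valid_idx, nu1, nu2) -> validation loss on the
    held-out fold after training on the remaining folds.
    """
    folds = k_fold_indices(n, k)
    best, best_loss = None, float("inf")
    for nu1, nu2 in grid:
        total = 0.0
        for i in range(k):
            valid = folds[i]
            # Concatenate all folds except fold i as the training split.
            train = [j for f in folds[:i] + folds[i + 1:] for j in f]
            total += loss_fn(train, valid, nu1, nu2)
        mean = total / k
        if mean < best_loss:
            best, best_loss = (nu1, nu2), mean
    return best
```

The paper's separate step-size tuning on a 20% subset would correspond to calling a similar routine once on a reduced index set before the full run.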
Hardware Specification | No | The paper does not specify any particular hardware details such as GPU models, CPU types, or cloud computing resources used for the experiments.
Software Dependencies | No | All algorithms are implemented in MATLAB. The paper does not provide version numbers for MATLAB or any other software dependencies.
Experiment Setup | Yes | The mini-batch size b is 50 for rcv1 and 100 for Year Prediction MSD. We set γ1 = 0.01, τ = 2, and T1 = n/b. For each method, the step size is tuned by running on a subset containing 20% of the training data for a few epochs (for the proposed method, we tune η1). We set λ1 in Algorithm 2 to 10^5 for rcv1 and 10^7 for Year Prediction MSD.
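The quoted parameters (γ1 = 0.01, τ = 2, T1 = n/b) suggest a geometric continuation schedule: per stage, the smoothing parameter shrinks while the inner iteration budget grows. The sketch below is one plausible reading of that schedule, not the paper's verbatim Algorithm 1/2; the function name and the assumption that both γ and T change by the same factor τ are introduced here.

```python
def continuation_schedule(gamma1=0.01, tau=2.0, T1=100, stages=5):
    """Yield (stage, gamma_s, T_s) for a geometric continuation schedule.

    Assumed geometry: gamma_{s+1} = gamma_s / tau (less smoothing each
    stage) and T_{s+1} = tau * T_s (more inner iterations each stage).
    In the paper's setup T1 = n / b, e.g. b = 50 for rcv1.
    """
    gamma, T = gamma1, float(T1)
    for s in range(1, stages + 1):
        yield s, gamma, int(T)
        gamma /= tau
        T *= tau
```

For example, with T1 = n // b and the reported γ1 = 0.01 and τ = 2, stage s would run for 2^(s-1) · T1 inner iterations with smoothing 0.01 / 2^(s-1).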