Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Efficient Learning with a Family of Nonconvex Regularizers by Redistributing Nonconvexity
Authors: Quanming Yao, James T. Kwok
JMLR 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we perform experiments on using the proposed procedure with (i) proximal algorithms (Sections 5.1 and 5.2); (ii) Frank-Wolfe algorithm (Section 5.3); (iii) comparison with HONOR (Section 5.4); and (iv) image denoising (Section 5.5). Experiments are performed on a PC with Intel i7 CPU and 32GB memory. All algorithms are implemented in Matlab. |
| Researcher Affiliation | Academia | Quanming Yao EMAIL James T. Kwok EMAIL Department of Computer Science and Engineering Hong Kong University of Science and Technology Hong Kong |
| Pseudocode | Yes | Algorithm 1 Frank-Wolfe algorithm for problem (8) with f convex (Zhang et al., 2012). Algorithm 2 Nonmonotonic APG (nm APG) (Li and Lin, 2015). Algorithm 3 Inexact nm APG. Algorithm 4 Frank-Wolfe algorithm for solving the nonconvex problem (31). Algorithm 5 warmstart(Ut, ut, Vt, vt, Bt, αt, βt). |
| Open Source Code | No | All algorithms are implemented in Matlab. |
| Open Datasets | Yes | we use the face data set JAFFE2, which contains 213 256×256 images with seven facial expressions... http://www.kasrl.org/jaffe.html We use the data sets MovieLens, Netflix and Yahoo, which have been commonly used for evaluating matrix completion (Mazumder et al., 2010; Wen et al., 2012; Hsieh and Olsen, 2014). three large data sets, kdd2010a, kdd2010b and url (Table 8). ... https://www.csie.ntu.edu.tw/~cjlin/libsvmtools/datasets/binary.html Eight popular images are used (Figure 16). http://www.cs.tut.fi/~foi/GCF-BM3D/ |
| Dataset Splits | Yes | We use 50% of the data for training, another 25% as validation set to tune λ, µ in (58), and the rest for testing. We use 60% of the data for training, 20% as validation set to tune µ, and the rest for testing. Following (Yao et al., 2015), we use 50% of the ratings for training, 25% for validation and the rest for testing. We randomly use 50% of the observed ratings for training, 25% for validation and the rest for testing. |
| Hardware Specification | Yes | Experiments are performed on a PC with Intel i7 CPU and 32GB memory. |
| Software Dependencies | No | All algorithms are implemented in Matlab. |
| Experiment Setup | Yes | The stepsize is fixed at τ = σ1(AᵀA). For performance evaluation, we use the (i) testing root-mean-squared error (RMSE) on the predictions... The stepsize η is obtained by line search. For performance evaluation, we use (i) the testing accuracy... Following (Gong and Ye, 2015a), we fix µ = 1 in (43), and θ in the LSP regularizer to 0.01µ. ...The threshold of the hybrid step in HONOR is set to 10^-10... The LSP function (with θ = 1) is used as κ in (46) on both the loss and regularizer. Eight popular images are used (Figure 16). ...To tune µ, we pick the value that leads to the smallest RMSE on the first four images... In the experiment, we set λ0 = 0.1 and ν = 0.95. |
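The Experiment Setup row quotes a fixed stepsize of τ = σ1(AᵀA) for the proximal algorithms. As a minimal illustration of that setup (not the paper's actual Matlab implementation), the sketch below runs plain proximal gradient (ISTA) on a convex ℓ1-regularized least-squares stand-in, using 1/L with L = σ1(AᵀA) as the stepsize; the problem instance, function names, and the choice of the ℓ1 prox are all assumptions made for the example.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t * ||.||_1 (elementwise soft-thresholding).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=200):
    # Proximal gradient for min_x 0.5*||Ax - b||^2 + lam*||x||_1,
    # with the stepsize fixed from L = sigma_1(A^T A), the Lipschitz
    # constant of the smooth part's gradient (mirroring the fixed
    # stepsize tau = sigma_1(A^T A) quoted in the table above).
    L = np.linalg.norm(A.T @ A, 2)  # spectral norm = sigma_1(A^T A)
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)          # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)  # prox step
    return x
```

The paper's procedure instead redistributes the nonconvexity of regularizers such as LSP so that a convex proximal step like the one above can still be applied; this sketch only shows the fixed-stepsize mechanics.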