Scalable Robust Matrix Factorization with Nonconvex Loss
Authors: Quanming Yao, James Kwok
NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper states: "Extensive experiments show that it outperforms the state-of-the-art in terms of both accuracy and speed." and "Extensive experiments on both synthetic and real-world data sets demonstrate superiority of the proposed algorithm over the state-of-the-art in terms of both accuracy and scalability." |
| Researcher Affiliation | Collaboration | Quanming Yao (1,2), James T. Kwok (2); (1) 4Paradigm Inc., Beijing, China; (2) Department of Computer Science and Engineering, Hong Kong University of Science and Technology, Hong Kong |
| Pseudocode | Yes | Algorithm 1 Robust matrix factorization using nonconvex loss (RMFNL) algorithm. |
| Open Source Code | No | The paper states, "All the codes are in Matlab, with sparse matrix operations implemented in C++." but does not provide an explicit statement of code release or a link. |
| Open Datasets | Yes | Experiments are performed on the popular MovieLens recommender data sets: MovieLens-100K, MovieLens-1M, and MovieLens-10M (some statistics on these data sets are in Appendix E.1). We use the Oxford Dinosaur sequence, which has 36 images and 4,983 feature points. |
| Dataset Splits | Yes | We randomly draw 10 log(m)/m % of the elements from M as observations, with half of them for training and the other half for validation. The remaining unobserved elements are for testing. (See the split sketch below the table.) |
| Hardware Specification | Yes | Experiments are performed on a PC with Intel i7 CPU and 32GB RAM. |
| Software Dependencies | No | The paper mentions "Matlab" and "C++" but does not specify version numbers for the software or any libraries used. |
| Experiment Setup | Yes | The iterate (U^1, V^1) is initialized as Gaussian random matrices, and the iterative procedure is stopped when the relative change in objective values between successive iterations is smaller than 10^-4. For the subproblems in RMF-MM and RMFNL, iteration is stopped when the relative change in objective value is smaller than 10^-6 or a maximum of 300 iterations is reached. The APG stepsize is determined by line search, and adaptive restart is used for further speedup [32]. We use the nonconvex loss functions of LSP, Geman and Laplace in Table 5 of Appendix A, with θ = 1; and fix λ = 20/(m + n) in (1) as suggested in [26]. (See the loss-function and stopping-rule sketch below the table.) |
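
As a reading aid, here is a minimal Python sketch of the train/validation/test split quoted in the Dataset Splits row. This is not the authors' Matlab code: the function name, the use of NumPy, and the literal reading of "10 log(m)/m %" as a percentage are all assumptions.

```python
import numpy as np

def split_observations(M, seed=0):
    """Sample a fraction of the entries of an m x n matrix M as observations,
    use half of them for training and half for validation, and hold out all
    remaining (unobserved) entries for testing."""
    rng = np.random.default_rng(seed)
    m, n = M.shape
    # Literal reading of "10 log(m)/m %"; the paper's exact convention may differ.
    frac = (10.0 * np.log(m) / m) / 100.0
    idx = rng.permutation(m * n)
    n_obs = int(frac * m * n)
    obs = idx[:n_obs]
    train_idx = obs[: n_obs // 2]   # half of the observed entries
    val_idx = obs[n_obs // 2 :]     # the other half, for validation
    test_idx = idx[n_obs:]          # all unobserved entries
    return train_idx, val_idx, test_idx

# Example on a synthetic 1000 x 1000 matrix, as in the synthetic experiments.
M = np.random.randn(1000, 1000)
train_idx, val_idx, test_idx = split_observations(M)
```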
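
And here is a hedged sketch of the three nonconvex penalties named in the Experiment Setup row, written with their standard definitions (the paper's Table 5 may use a different scaling), together with the relative-change stopping rule; all names are illustrative.

```python
import numpy as np

THETA = 1.0  # theta = 1, as stated in the experiment setup

# Standard forms of the three penalties (assumed to match Table 5 of Appendix A):
def lsp(a, theta=THETA):
    """Log-sum penalty: log(1 + |a| / theta)."""
    return np.log(1.0 + np.abs(a) / theta)

def geman(a, theta=THETA):
    """Geman penalty: |a| / (|a| + theta)."""
    return np.abs(a) / (np.abs(a) + theta)

def laplace(a, theta=THETA):
    """Laplace penalty: 1 - exp(-|a| / theta)."""
    return 1.0 - np.exp(-np.abs(a) / theta)

def converged(f_prev, f_curr, tol=1e-4):
    """Relative change in objective between successive iterations.
    Per the setup: tol = 1e-4 for the outer loop, 1e-6 for the subproblems
    (the latter capped at 300 iterations)."""
    return abs(f_prev - f_curr) / max(abs(f_prev), 1e-12) <= tol
```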