Greedy Learning of Generalized Low-Rank Models
Authors: Quanming Yao, James T. Kwok
IJCAI 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Experimental results show that it is much faster than the state-of-the-art, with comparable or even better prediction performance." ... "In this section, we compare the proposed algorithms with the state-of-the-art on link prediction and robust matrix factorization. Experiments are performed on a PC with Intel i7 CPU and 32GB RAM. All the codes are in Matlab." |
| Researcher Affiliation | Academia | Quanming Yao James T. Kwok Department of Computer Science and Engineering Hong Kong University of Science and Technology Hong Kong {qyaoaa, jamesk}@cse.ust.hk |
| Pseudocode | Yes | "Algorithm 1: R1MP [Wang et al., 2014]"; "Algorithm 2: GLRL for low-rank matrix learning with smooth convex objective f"; "Algorithm 3: GLRL for low-rank matrix learning with nonsmooth objective f". |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-source code of their proposed GLRL/EGLRL method. The only GitHub link provided is for a baseline method (AIS-Impute). |
| Open Datasets | Yes | "Experiments are performed on the Epinions and Slashdot data sets [Chiang et al., 2014] (Table 1)", from https://snap.stanford.edu/data/ ... "Experiments are performed on the MovieLens data sets (Table 3)", from http://grouplens.org/datasets/movielens/ |
| Dataset Splits | Yes | "Following [Chiang et al., 2014], we use 10-fold cross-validation and fix the rank r to 40." ... "50% of the ratings are randomly sampled for training while the rest are used for testing." |
| Hardware Specification | Yes | Experiments are performed on a PC with Intel i7 CPU and 32GB RAM. |
| Software Dependencies | No | "All the codes are in Matlab." The paper does not state version numbers for Matlab or for any key libraries or solvers used. |
| Experiment Setup | Yes | "As in [Wang et al., 2014], we fix the number of power method iterations to 30." ... "Following [Chiang et al., 2014], we use 10-fold cross-validation and fix the rank r to 40." ... "For AIS-Impute and AltMin, they are stopped when the relative change in the objective is smaller than 10^-4." ... "we compare GLRL in Algorithm 3 (with [parameter] = 0.99 and c2 = 0.05)" ... "The ranks used for the 100K, 1M, 10M data sets are 10, 10, and 20, respectively." ... "Experiments are repeated five times with random training/testing splits." |
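The pseudocode entries above (R1MP and GLRL) both build the solution one rank-one component at a time: at each iteration, take the top singular vector pair of the current residual on the observed entries and add an optimally scaled rank-one update. The sketch below is an illustrative NumPy rendition of this greedy rank-one pursuit idea for matrix completion; it is not the authors' Matlab code, and the function name, iteration count, and toy data are assumptions for the demo.

```python
import numpy as np

def rank_one_pursuit(M, mask, n_iter=50):
    """Greedy rank-one pursuit for matrix completion (illustrative sketch).

    M    : matrix with target values on observed entries
    mask : boolean array, True where an entry of M is observed
    Each step adds alpha * u v^T, where (u, v) is the top singular pair
    of the masked residual and alpha is the least-squares step size.
    """
    X = np.zeros_like(M, dtype=float)
    for _ in range(n_iter):
        R = (M - X) * mask                   # residual on observed entries
        U, s, Vt = np.linalg.svd(R)          # full SVD is fine for a small demo
        B = np.outer(U[:, 0], Vt[0, :])      # rank-one basis from top pair
        num = np.sum(R * B)                  # <R, B> restricted to the mask
        den = np.sum((B * mask) ** 2) + 1e-12
        X = X + (num / den) * B              # optimal step along B
    return X

# toy demo: recover a rank-2 matrix from ~60% observed entries
rng = np.random.default_rng(0)
M = rng.standard_normal((20, 2)) @ rng.standard_normal((2, 20))
mask = rng.random(M.shape) < 0.6
X = rank_one_pursuit(M, mask)
rel_err = np.linalg.norm((M - X) * mask) / np.linalg.norm(M * mask)
```

In practice, R1MP-style methods replace the full SVD with a few power-method iterations to extract only the leading singular pair (the paper fixes this to 30 iterations), which is what makes the greedy approach fast on large matrices.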