Incremental Matrix Factorization: A Linear Feature Transformation Perspective
Authors: Xunpeng Huang, Le Wu, Enhong Chen, Hengshu Zhu, Qi Liu, Yijun Wang
IJCAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive experimental results on two real-world datasets clearly validate the effectiveness, efficiency and storage performance of the proposed framework. |
| Researcher Affiliation | Collaboration | Anhui Province Key Lab. of Big Data Analysis and Application, University of Science and Technology of China; School of Computer and Information, Hefei University of Technology; Baidu Talent Intelligence Center. hxpsola@mail.ustc.edu.cn, lewu@hfut.edu.cn, cheneh@ustc.edu.cn, zhuhengshu@baidu.com, qiliuql@ustc.edu.cn, wyjun@mail.ustc.edu.cn |
| Pseudocode | Yes | Algorithm 1: The Linear Feature Transformation Process. Algorithm 2: FAVA LFT Algorithm. A hedged Python sketch of the transformation step appears after this table. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-sourcing the described methodology's code. |
| Open Datasets | Yes | The experiments were conducted on two real-world datasets, MovieLens 1M (ML1M) and MovieLens 10M (ML10M), which are standard datasets in rating prediction. The data was collected by GroupLens Research from the MovieLens website (http://movielens.org). |
| Dataset Splits | Yes | First, the experimental data, sorted by timestamp, was divided into T disjoint continuous parts of similar scale; we call the i-th incremental data part i-ID. Second, 1-ID was used as the batch training set of the incremental recommenders. After that, we applied i-ID to simulate a real-world rating batch and evaluated each model on the four metrics mentioned above. Note that we only predicted ratings for which both the user and the item had appeared in a former ID. The vector-retraining models (RMF-R, IRMF, QRMF-R and QIRMF) need to process the incoming ratings one by one, while FAVA LFT can update the whole feature matrix batch by batch (e.g., Algorithm 1). We repeated steps 3 and 4 from i = 2 until i = T. A chronological-split sketch follows the table. |
| Hardware Specification | No | No specific hardware details (such as CPU/GPU models, memory, or cloud instance types) used for experiments were mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific software names with version numbers required to replicate the experiment. |
| Experiment Setup | Yes | Parameter settings: the initial feature matrices of RMF-R, IRMF, QRMF-R and QIRMF were drawn at random from a normal distribution N(0, 0.01). The main parameters, listed in Table 3 of the paper, are: the regularization coefficient λ, the step size η, the number of iterations in the incremental training phase D, the rank of the latent feature matrices K, the size of the sampled matrix \|Ω\|, the convergence threshold ε on the absolute difference of the objective function in FAVA LFT, and the number of IDs for each dataset T. An illustrative configuration sketch follows the table. |
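
The paper's pseudocode is not reproduced here, but the core idea behind Algorithm 1 can be illustrated compactly. Below is a minimal Python/NumPy sketch, assuming the incremental update is expressed as right-multiplication of the old user feature matrix by a learned K×K transformation fit to the newly arrived ratings; the function name, the plain SGD update, and all hyperparameter defaults are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def linear_feature_transform(U_old, new_ratings, V, lam=0.01, eta=0.005, n_iters=50):
    """Sketch of a linear feature transformation step (cf. Algorithm 1).

    Instead of retraining every user vector, learn a small K x K matrix M
    so that U_old @ M fits the newly arrived ratings.
    new_ratings: iterable of (user_idx, item_idx, rating) triples.
    """
    K = U_old.shape[1]
    M = np.eye(K)                                # start from the identity (no change)
    for _ in range(n_iters):
        for u, i, r in new_ratings:
            pred = U_old[u] @ M @ V[i]           # rating predicted after the transform
            err = r - pred
            grad = -err * np.outer(U_old[u], V[i]) + lam * M
            M -= eta * grad                      # gradient step on the transform only
    return U_old @ M                             # transformed user feature matrix

# Example usage (synthetic shapes only):
# U = np.random.normal(0, 0.01, (100, 8)); V = np.random.normal(0, 0.01, (50, 8))
# U_new = linear_feature_transform(U, [(0, 3, 4.0), (2, 7, 3.5)], V)
```

The design point this illustrates is the one the paper's title names: the whole feature matrix is updated by a single K×K linear map per batch, rather than one vector-retraining pass per incoming rating.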
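The split protocol quoted in the Dataset Splits row (timestamp sort, T disjoint similar-scale parts, 1-ID for batch training, then incremental evaluation) can be sketched as follows. Only the chronological ordering and the T-way split come from the paper; the helper names and the loop structure are assumptions for illustration.

```python
import numpy as np

def chronological_splits(ratings, T):
    """Split timestamp-sorted ratings into T disjoint, similar-sized parts.

    ratings: array of (user, item, rating, timestamp) rows. The i-th part
    plays the role of the 'i-ID' incremental batch described above.
    """
    order = np.argsort(ratings[:, 3])            # sort by timestamp (column 3)
    return np.array_split(ratings[order], T)

# Assumed evaluation loop (model API and helper are hypothetical):
# parts = chronological_splits(ratings, T=10)
# model.batch_train(parts[0])                    # 1-ID bootstraps the model
# for i in range(1, len(parts)):
#     seen = np.concatenate(parts[:i])
#     batch = filter_to_known_users_items(parts[i], seen)  # hypothetical helper
#     evaluate(model, batch)                     # score before updating
#     model.incremental_update(batch)            # then absorb the new batch
```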
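Finally, the parameter settings can be mirrored in a small configuration sketch. The concrete values below are placeholders rather than the paper's tuned settings (those are in its Table 3, not reproduced here), and N(0, 0.01) is read here as mean 0 with standard deviation 0.01.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative parameter dictionary mirroring Table 3; values are placeholders.
params = {
    "lam": 0.01,         # regularization coefficient lambda
    "eta": 0.005,        # SGD step size
    "D": 10,             # iterations in the incremental training phase
    "K": 20,             # rank of the latent feature matrices
    "omega_size": 1000,  # |Omega|, size of the sampled matrix
    "eps": 1e-4,         # convergence threshold on the FAVA LFT objective
    "T": 10,             # number of incremental data parts (IDs)
}

# Feature matrices initialized from N(0, 0.01), as stated above
# (0.01 assumed to be the standard deviation).
n_users, n_items = 6040, 3706    # approximate MovieLens 1M dimensions
U = rng.normal(0.0, 0.01, size=(n_users, params["K"]))
V = rng.normal(0.0, 0.01, size=(n_items, params["K"]))
```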