Matrix Completion with Preference Ranking for Top-N Recommendation
Authors: Zengmao Wang, Yuhong Guo, Bo Du
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on E-commerce datasets. The experimental results show the proposed approach outperforms several state-of-the-art methods. |
| Researcher Affiliation | Academia | 1 School of Computer Science, Wuhan University, Wuhan 430072, China 2 School of Computer Science, Carleton University, Ottawa, Canada |
| Pseudocode | Yes | Algorithm 1 Matrix Completion with Preference Ranking |
| Open Source Code | No | The paper mentions using existing codebases for the baseline methods (e.g., 'librec code for these two methods', and the 'released codes in the original papers' for MC and NeuMF), but does not state that the source code for its own proposed method (PrMC) is released or available. |
| Open Datasets | Yes | We conducted experiments on six Amazon datasets: Automotive, Musical, Video, Office, Sports and Clothes. The datasets are downloaded from http://jmcauley.ucsd.edu/data/amazon/ |
| Dataset Splits | Yes | On each dataset, we selected the latest transaction item for each user as the test set and used the remaining items as the training set. For efficient test evaluation, we randomly sampled 100 items without transaction records as part of the test set and produced the top-N rank list from only the test set, which is a common strategy used in [He et al., 2017; Koren, 2008; Elkahky et al., 2015]. We adopted the leave-one-out technique to evaluate the performances of different recommendation methods by following the setting of previous literature work [He et al., 2017]. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (CPU, GPU, memory, etc.) used for running the experiments. |
| Software Dependencies | No | The paper mentions using 'librec code' and 'released codes in the original papers' for baseline methods, but it does not specify version numbers for any software components or libraries used in its own implementation or experimentation. |
| Experiment Setup | Yes | We performed the following parameter selection for the trade-off parameters and algorithm running parameters. The latent dimension of BPR is selected from {50, 100, 200, 500}. For SLIM, we chose the l1 norm parameter from {1, 10, 100}, and the l2 norm parameter from {0.001, 0.01, 0.1}. For MC, the Lagrangian penalty parameter is selected from {0.01, 0.02, 0.03}, and the γ for updating the Lagrangian penalty is selected from {1.5, 2, 2.5, 3, 3.5, 4}. For NeuMF, we used the most effective settings according to the original paper. For the proposed method, we chose the penalty parameter ρ from the same set as MC, and selected the weight parameter α from {0.001, 0.01, 0.1, 1, 3, 5}. We fixed γ as 1.5 and β as 5000. |
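The dataset-split protocol quoted above (latest transaction per user held out for testing, ranking against 100 sampled non-interacted items) can be sketched as follows. This is a minimal illustration of the described procedure, not the authors' code; the `transactions` data layout and function names are assumptions.

```python
import random
from collections import defaultdict

def leave_one_out_split(transactions):
    """Hold out each user's latest transaction as the test item.

    `transactions` maps user -> list of (item, timestamp) pairs
    (a hypothetical layout; the paper does not specify one).
    Returns (train, test) where train[user] is the item history and
    test[user] is the single held-out item.
    """
    train, test = defaultdict(list), {}
    for user, items in transactions.items():
        ordered = sorted(items, key=lambda pair: pair[1])  # oldest first
        *history, latest = ordered
        train[user] = [item for item, _ in history]
        test[user] = latest[0]
    return train, test

def sample_test_candidates(all_items, interacted, test_item, n_neg=100, rng=None):
    """Pair the held-out item with 100 items the user never interacted with.

    The top-N rank list is then produced over these 101 candidates only,
    as in the evaluation strategy the paper cites from [He et al., 2017].
    """
    rng = rng or random.Random(0)
    negatives = rng.sample(sorted(set(all_items) - set(interacted)), n_neg)
    return [test_item] + negatives
```

A recommender is then scored by ranking the 101 candidates per user and computing hit ratio / NDCG at N over the held-out items.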
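The parameter-selection ranges in the Experiment Setup row amount to a small grid search per method. A sketch of how those grids could be enumerated is below; the parameter names (`latent_dim`, `rho`, etc.) are illustrative labels, since no released implementation fixes them.

```python
from itertools import product

# Grids as reported in the table above; names are illustrative, not official.
GRIDS = {
    "BPR":  {"latent_dim": [50, 100, 200, 500]},
    "SLIM": {"l1": [1, 10, 100], "l2": [0.001, 0.01, 0.1]},
    "MC":   {"rho": [0.01, 0.02, 0.03],
             "gamma": [1.5, 2, 2.5, 3, 3.5, 4]},
    "PrMC": {"rho": [0.01, 0.02, 0.03],          # same set as MC
             "alpha": [0.001, 0.01, 0.1, 1, 3, 5],
             "gamma": [1.5],                      # fixed in the paper
             "beta": [5000]},                     # fixed in the paper
}

def configurations(method):
    """Yield every parameter combination for one method's grid."""
    grid = GRIDS[method]
    keys = sorted(grid)
    for values in product(*(grid[key] for key in keys)):
        yield dict(zip(keys, values))
```

For example, SLIM's grid yields 3 × 3 = 9 candidate settings, and the proposed PrMC yields 3 × 6 = 18 (γ and β being fixed).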