Improving Cross-Domain Recommendation through Probabilistic Cluster-Level Latent Factor Model
Authors: Siting Ren, Sheng Gao, Jianxin Liao, Jun Guo
AAAI 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on several real world datasets demonstrate that our proposed model outperforms the state-of-the-art methods for the cross-domain recommendation task. |
| Researcher Affiliation | Academia | Siting Ren, Sheng Gao, Jianxin Liao, and Jun Guo; Beijing University of Posts and Telecommunications; {rensiting, gaosheng, guojun}@bupt.edu.cn, liaojianxin@ebupt.com |
| Pseudocode | No | The paper describes mathematical models and algorithms but does not include pseudocode or a clearly labeled algorithm block. |
| Open Source Code | No | The paper does not provide any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | Yes | Some of the MAE (Mean Absolute Error) performance results on the Book-Crossing [1] vs. EachMovie [2] datasets are shown in Table 1. [...] [1] http://www.informatik.uni-freiburg.de/~cziegler/BX/ [2] http://www.cs.cmu.edu/~lebanon/IR-lab.htm (a hedged MAE sketch follows the table) |
| Dataset Splits | Yes | In each dataset, we randomly choose 500 users (300 for training, 200 for testing) with more than 16 ratings. We keep different sizes of observed ratings as the initialization of test users. Given-5 means that 5 ratings of each test user are given for training to avoid the cold-start problem, and the remaining are used for evaluation. |
| Hardware Specification | No | The paper does not describe any specific hardware used to run the experiments, such as GPU/CPU models or other system specifications. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, PyTorch versions). |
| Experiment Setup | Yes | We normalize the rating scales of each dataset from 1 to 5 for a fair comparison. In each dataset, we randomly choose 500 users (300 for training, 200 for testing) with more than 16 ratings. We keep different sizes of observed ratings as the initialization of test users. Given-5 means that 5 ratings of each test user are given for training to avoid the cold-start problem, and the remaining are used for evaluation. (a sketch of this Given-N protocol follows the table) |
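
The split protocol quoted above (500 users with more than 16 ratings, 300 for training and 200 for testing, Given-N initialization of test users) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name `given_n_split`, the `ratings_by_user` dict-of-dicts input format, and the fixed `seed` are our assumptions.

```python
import random

def given_n_split(ratings_by_user, n_given=5, n_train_users=300,
                  n_test_users=200, min_ratings=16, seed=0):
    """Sketch of the paper's protocol: sample 500 users with more than
    `min_ratings` ratings, use 300 for training and 200 for testing; for
    each test user, keep `n_given` observed ratings (Given-N, to avoid the
    cold-start problem) and hold out the rest for evaluation.
    `ratings_by_user` maps user -> {item: rating} (an assumed format)."""
    rng = random.Random(seed)
    eligible = [u for u, r in ratings_by_user.items() if len(r) > min_ratings]
    sampled = rng.sample(eligible, n_train_users + n_test_users)
    train_users, test_users = sampled[:n_train_users], sampled[n_train_users:]

    train = {u: ratings_by_user[u] for u in train_users}
    given, held_out = {}, {}
    for u in test_users:
        items = list(ratings_by_user[u].items())
        rng.shuffle(items)
        given[u] = dict(items[:n_given])      # observed ratings of the test user
        held_out[u] = dict(items[n_given:])   # held out for MAE evaluation
    return train, given, held_out
```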
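The paper evaluates with MAE on the held-out ratings and normalizes all rating scales to 1 to 5. A minimal sketch of both steps is below; the linear min-max rescaling is an assumption (the paper states the target scale but not the mapping), and both function names are ours.

```python
def normalize_to_1_5(rating, scale_min, scale_max):
    # Assumed linear rescaling from [scale_min, scale_max] to [1, 5];
    # the paper only says ratings are normalized to a 1-5 scale.
    return 1.0 + 4.0 * (rating - scale_min) / (scale_max - scale_min)

def mean_absolute_error(predicted, held_out):
    # MAE over all held-out (user, item) pairs of the test users:
    # mean of |predicted rating - true rating|.
    errors = [abs(predicted[u][i] - r)
              for u, items in held_out.items()
              for i, r in items.items()]
    return sum(errors) / len(errors)
```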