Recommendation with Multi-Source Heterogeneous Information
Authors: Li Gao, Hong Yang, Jia Wu, Chuan Zhou, Weixue Lu, Yue Hu
IJCAI 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two real-world data sets demonstrate that CDNE can use network representation learning to boost the recommendation performance. We compare our method with state-of-the-art methods on two real-world data sets to evaluate the performance. Experimental results demonstrate that our method significantly outperforms the baseline methods in terms of the precision and MRR metrics. |
| Researcher Affiliation | Collaboration | Institute of Information Engineering, Chinese Academy of Sciences, Beijing, China; Centre for Artificial Intelligence, University of Technology Sydney, Australia; Department of Computing, Macquarie University, Sydney, Australia; Data Science Lab, JD.com, Beijing, China; School of Cyber Security, University of Chinese Academy of Sciences, Beijing, China |
| Pseudocode | No | The paper describes the generative process and parameter optimization steps in narrative text, but does not contain structured pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement about releasing source code for the described methodology, nor does it provide any links to a code repository. |
| Open Datasets | Yes | We use two real-world data sets [Wang et al., 2013] extracted from CiteULike for experimental analysis. The detailed information of the data can be found at http://www.citeulike.org/faq/data.adp. |
| Dataset Splits | Yes | For each user u_i, 70% of the items (i.e., articles) are randomly sampled as the training data, and the remaining 30% are used for testing. We then randomly choose one record of each user from the training data set to construct the validation data. (See the split sketch below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running its experiments. |
| Software Dependencies | No | The paper mentions models like DeepWalk and Skip-gram, but does not provide specific ancillary software details with version numbers (e.g., library names like PyTorch, TensorFlow, or scikit-learn with their versions). |
| Experiment Setup | Yes | All compared methods use the same number of latent factors K, K = 200. For all neural network models, the window size c is set as c = 8. As a result, we set the hyperparameters as a = 1, b = 0.01, σs = 1, σt = 0.5, σu = 0.1, σv = 1. The learning rate α is set as α = 0.01. (These values are collected into a config sketch below the table.) |
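
For concreteness, the per-user split described in the Dataset Splits row can be sketched as follows. This is a minimal sketch, assuming a user-to-items mapping; the example data, the fixed seed, and the choice to hold the validation record out of the training set are illustrative assumptions, not details confirmed by the paper.

```python
import random

# Hypothetical user -> liked-articles mapping; names are illustrative only.
user_items = {
    "u1": ["a1", "a2", "a3", "a4", "a5", "a6", "a7", "a8", "a9", "a10"],
    "u2": ["a3", "a5", "a11", "a12", "a13", "a14"],
}

train, val, test = {}, {}, {}
rng = random.Random(42)  # fixed seed for repeatability (not specified in the paper)

for user, items in user_items.items():
    shuffled = items[:]
    rng.shuffle(shuffled)
    cut = int(round(0.7 * len(shuffled)))           # 70% of each user's items -> training
    train_items, test_items = shuffled[:cut], shuffled[cut:]
    val_item = rng.choice(train_items)              # one training record per user -> validation
    # Whether the validation record is removed from training is not stated;
    # this sketch holds it out.
    train[user] = [i for i in train_items if i != val_item]
    val[user] = [val_item]
    test[user] = test_items                         # remaining 30% -> testing
```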
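
Likewise, the reported hyperparameters from the Experiment Setup row can be gathered into a single configuration. Only the values come from the paper; the variable names and comments are illustrative, and the roles of a and b are not detailed in the quoted text.

```python
# Hyperparameter values as quoted in the Experiment Setup row; names are ours.
config = {
    "K": 200,        # number of latent factors (shared by all compared methods)
    "c": 8,          # window size for the neural network models
    "a": 1.0,        # model hyperparameter (role not detailed in the quote)
    "b": 0.01,       # model hyperparameter (role not detailed in the quote)
    "sigma_s": 1.0,
    "sigma_t": 0.5,
    "sigma_u": 0.1,
    "sigma_v": 1.0,
    "alpha": 0.01,   # learning rate
}
```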