Explainable Reasoning over Knowledge Graphs for Recommendation
Authors: Xiang Wang, Dingxian Wang, Canran Xu, Xiangnan He, Yixin Cao, Tat-Seng Chua
AAAI 2019, pp. 5329-5336 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on two datasets about movie and music, demonstrating significant improvements over state-of-the-art solutions Collaborative Knowledge Base Embedding and Neural Factorization Machine. |
| Researcher Affiliation | Collaboration | Xiang Wang,¹ Dingxian Wang,² Canran Xu,² Xiangnan He,³ Yixin Cao,¹ Tat-Seng Chua¹ (¹School of Computing, National University of Singapore; ²eBay; ³School of Information Science and Technology, University of Science and Technology of China) |
| Pseudocode | No | The paper describes the model architecture and equations but does not provide structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We verify our method on the data, and release the data and the codes to facilitate the community working on emerging field of KG-enhanced recommendation. |
| Open Datasets | Yes | For movie domain, we use the combination of MovieLens-1M and IMDb datasets, named MI... For music domain, we use the benchmark dataset, KKBox... We verify our method on the data, and release the data and the codes to facilitate the community working on emerging field of KG-enhanced recommendation. |
| Dataset Splits | Yes | For each dataset, we holdout the 80% and 20% interaction history of each user randomly to construct the training and test sets. (See the per-user holdout sketch below the table.) |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers. |
| Experiment Setup | Yes | We optimize all models with Adaptive Moment Estimation (Adam) and apply a grid search to find the best settings of hyperparameters. The learning rate is searched in {0.001, 0.002, 0.01, 0.02}, while the coefficient of L2 regularization is tuned amongst {10⁻⁵, 10⁻⁴, 10⁻³, 10⁻²}. Other hyperparameters of our proposed model are empirically set as follows: the batch size is 256, the embedding size of relation and entity type is 32, the embedding size of entity value is 64, and the unit number of LSTM is 256. The dimensions of latent factors for MF, NFM, and CKE are empirically set to 64. For FMG, we set the rank used to factorize meta-graph similarity matrices to 10, and the factor size of the second-order weights to 10, as suggested by (Zhao et al. 2017). Moreover, the early stopping strategy is performed, i.e., premature stopping if hit@15 on the test data does not increase for five successive epochs. (See the training configuration sketch below the table.) |
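
The per-user 80/20 holdout quoted under Dataset Splits can be reproduced in spirit with a few lines of pandas. This is a minimal sketch, not the authors' released code: the column name `user_id`, the function name `per_user_holdout`, and the fixed `seed` are assumptions (the paper reports a random split but no seed).

```python
import numpy as np
import pandas as pd

def per_user_holdout(interactions: pd.DataFrame, train_frac: float = 0.8,
                     seed: int = 42):
    """Hold out train_frac of each user's interaction history for training
    and the remainder for testing. The seed is an assumption; the paper
    describes a random split without reporting one."""
    rng = np.random.default_rng(seed)
    train_parts, test_parts = [], []
    for _, history in interactions.groupby("user_id"):
        order = rng.permutation(len(history))  # shuffle this user's rows
        cut = int(train_frac * len(history))   # 80% boundary per user
        train_parts.append(history.iloc[order[:cut]])
        test_parts.append(history.iloc[order[cut:]])
    return pd.concat(train_parts), pd.concat(test_parts)

# Usage: train, test = per_user_holdout(df)  # df needs a "user_id" column
```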
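
The Experiment Setup row pins down enough detail to write out the search space and stopping rule. Below is a hedged Python sketch of that configuration, assuming illustrative names (`FIXED`, `GRID`, `should_stop`) that do not come from the authors' code; the comparison inside `should_stop` is one reasonable reading of "hit@15 ... does not increase for five successive epochs".

```python
from itertools import product

# Hyperparameters fixed in the paper (dictionary keys are illustrative).
FIXED = {
    "batch_size": 256,
    "relation_type_embed_size": 32,  # relation and entity-type embeddings
    "entity_value_embed_size": 64,   # entity-value embeddings
    "lstm_units": 256,
}

# Search space the authors grid-search with Adam.
GRID = {
    "learning_rate": [0.001, 0.002, 0.01, 0.02],
    "l2_coefficient": [1e-5, 1e-4, 1e-3, 1e-2],
}

def configs():
    """Yield every grid combination merged with the fixed settings."""
    keys = list(GRID)
    for values in product(*(GRID[k] for k in keys)):
        yield {**FIXED, **dict(zip(keys, values))}

def should_stop(hit_at_15: list, patience: int = 5) -> bool:
    """Stop once hit@15 has failed to improve on its previous best for
    `patience` successive epochs (an interpretation of the quoted rule)."""
    if len(hit_at_15) <= patience:
        return False
    return max(hit_at_15[-patience:]) <= max(hit_at_15[:-patience])
```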