Graph Heterogeneous Multi-Relational Recommendation
Authors: Chong Chen, Weizhi Ma, Min Zhang, Zhaowei Wang, Xiuqiang He, Chenyang Wang, Yiqun Liu, Shaoping Ma
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on two public benchmarks show that GHCF significantly outperforms the state-of-the-art recommendation methods, especially for cold-start users who have few primary item interactions. Further analysis verifies the importance of the proposed embedding propagation for modelling high-hop heterogeneous user-item interactions, showing the rationality and effectiveness of GHCF. |
| Researcher Affiliation | Collaboration | Chong Chen 1, Weizhi Ma 1, Min Zhang 1 *, Zhaowei Wang 2, Xiuqiang He 2, Chenyang Wang 1, Yiqun Liu 1, and Shaoping Ma 1. 1 Department of Computer Science and Technology, Institute for Artificial Intelligence, Beijing National Research Center for Information Science and Technology, Tsinghua University; 2 Huawei Noah's Ark Lab. cc17@mails.tsinghua.edu.cn, z-m@tsinghua.edu.cn |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our implementation has been released (https://github.com/chenchongthu/GHCF). |
| Open Datasets | Yes | The two datasets are preprocessed to filter out users and items with fewer than 5 purchase interactions. After that, the last purchase records of users are used as test data, the second-to-last records are used as validation data, and the remaining records are used for training. Note that for objective comparison, in our experiments the two datasets are exactly the same as those used in (Chen et al. 2020d), in which the split datasets are publicly available (Taobao dataset: https://tianchi.aliyun.com/dataset/dataDetail?dataId=649; split data in the EHCF repository: https://github.com/chenchongthu/EHCF). |
| Dataset Splits | Yes | After that, the last purchase records of users are used as test data, the second last records are used as validation data, and the remaining records are used for training. |
| Hardware Specification | Yes | All experiments are run on the same machine (Intel Xeon 8-core CPU at 2.4 GHz and a single NVIDIA GeForce GTX TITAN X GPU) for fair comparison. |
| Software Dependencies | No | The paper mentions using Adam (Kingma and Ba 2014) as the optimizer and dropout (Srivastava et al. 2014) but does not provide specific version numbers for these or other software dependencies. |
| Experiment Setup | Yes | After the tuning process, the batch size is set to 256 and the size of the latent factor dimension d is set to 64. The learning rate is set to 0.001. We set the negative sampling ratio to 4 for sampling-based methods... For non-sampling methods ENMF, EHCF and our GHCF, the negative weight is set to 0.01 for Beibei and 0.1 for Taobao. The number of graph layers is set to 4, and the dropout ratio is set to 0.8 for Beibei and Taobao to prevent overfitting. |
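The dataset-split protocol quoted above (per user: last purchase for test, second-to-last for validation, remainder for training) can be sketched as follows. This is an illustrative sketch, not the authors' released code; the function name `leave_one_out_split` and the `(user, item, timestamp)` tuple format are assumptions for the example.

```python
# Hypothetical sketch of the leave-one-out split described in the paper:
# per user, the last purchase -> test, the second-to-last -> validation,
# and the remaining purchases -> training. Assumes each user has at least
# two records, which holds after the <5-interaction filtering step.
from collections import defaultdict

def leave_one_out_split(interactions):
    """interactions: iterable of (user, item, timestamp) tuples."""
    by_user = defaultdict(list)
    for user, item, ts in interactions:
        by_user[user].append((ts, item))

    train, valid, test = [], [], []
    for user, records in by_user.items():
        records.sort()  # chronological order by timestamp
        items = [item for _, item in records]
        test.append((user, items[-1]))             # last purchase
        valid.append((user, items[-2]))            # second-to-last purchase
        train += [(user, i) for i in items[:-2]]   # everything earlier
    return train, valid, test
```

Sorting by timestamp before slicing is what makes the split temporal rather than random, which matches the quoted protocol and keeps the test interaction strictly after all training interactions for each user.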