A Graphical and Attentional Framework for Dual-Target Cross-Domain Recommendation
Authors: Feng Zhu, Yan Wang, Chaochao Chen, Guanfeng Liu, Xiaolin Zheng
IJCAI 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on four real-world datasets demonstrate that GA-DTCDR significantly outperforms the state-of-the-art approaches. |
| Researcher Affiliation | Collaboration | 1 Department of Computing, Macquarie University, Sydney, NSW 2109, Australia 2 Ant Financial Services Group, Hangzhou 310012, China 3 College of Computer Science and Technology, Zhejiang University, Hangzhou 310027, China |
| Pseudocode | No | The paper does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the release of its source code. |
| Open Datasets | Yes | To validate the recommendation performance of our GA-DTCDR approach and baseline approaches, we choose four real-world datasets (see Table 2), i.e., three Douban subsets (Douban Book, Douban Music, and Douban Movie) [Zhu et al., 2019], and MovieLens 20M [Harper and Konstan, 2016]. |
| Dataset Splits | No | The paper describes its test split and training strategy but does not explicitly mention a validation split or how validation was performed for hyperparameter tuning. |
| Hardware Specification | No | The paper does not specify any particular hardware components such as GPU or CPU models used for the experiments. |
| Software Dependencies | No | The paper mentions software like Doc2vec, Node2vec, Adam, and Stanford Core NLP, but it does not specify their version numbers. |
| Experiment Setup | Yes | For training our GA-DTCDR, we randomly select 7 negative instances for each observed positive instance into Y_sampled, adopt Adam [Kingma and Ba, 2014] to train the neural network, and set the maximum number of training epochs to 50. The learning rate is 0.001, the regularization coefficient λ is 0.001, and the batch size is 1,024. To answer Q3, the dimension k of the embedding varies in {8, 16, 32, 64, 128}. |
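The Experiment Setup row can be made concrete with a minimal sketch of the reported configuration. The sampling function and the `CONFIG` names below are illustrative assumptions, not the authors' code; only the hyperparameter values (7 negatives per positive, Adam, learning rate 0.001, λ = 0.001, batch size 1,024, 50 epochs, k ∈ {8, 16, 32, 64, 128}) come from the paper.

```python
import random

def sample_negatives(positive_items, all_items, num_neg=7, rng=None):
    """For each observed positive item, draw `num_neg` unobserved items
    as negatives, mirroring the paper's 7-negatives-per-positive setup."""
    rng = rng or random.Random(0)
    candidates = [i for i in all_items if i not in positive_items]
    return {pos: rng.sample(candidates, num_neg) for pos in positive_items}

# Hyperparameters as reported in the paper (key names are illustrative).
CONFIG = {
    "optimizer": "Adam",
    "learning_rate": 0.001,
    "reg_lambda": 0.001,
    "batch_size": 1024,
    "max_epochs": 50,
    "num_negatives": 7,
    "embedding_dims": [8, 16, 32, 64, 128],  # dimension k varied for Q3
}
```

A quick check: for a user with positives {1, 2, 3} over 20 items, `sample_negatives` returns 7 negatives per positive, none of which overlap the positives.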