A Deep Framework for Cross-Domain and Cross-System Recommendations
Authors: Feng Zhu, Yan Wang, Chaochao Chen, Guanfeng Liu, Mehmet Orgun, Jia Wu
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments conducted on three real-world datasets demonstrate that the DCDCSR framework outperforms the state-of-the-art CDR and CSR approaches in terms of recommendation accuracy. |
| Researcher Affiliation | Collaboration | Feng Zhu¹, Yan Wang¹, Chaochao Chen², Guanfeng Liu¹, Mehmet Orgun¹, Jia Wu¹. ¹ Department of Computing, Macquarie University, Sydney, NSW 2109, Australia; ² AI Department, Ant Financial Services Group, Hangzhou 310012, China |
| Pseudocode | Yes | Algorithm 1 The DCDCSR Framework. Require: the rating matrices, user sets, and item sets of the source and target domains or systems Rs, Rt, Us, Ut, Vs, and Vt. Ensure: recommend items vi ∈ Vt to a target user ui in the target domain or system. Phase 1: MF Modeling. 1: Learn {Us, Vs} from Rs by using matrix factorization; 2: Learn {Ut, Vt} from Rt by using matrix factorization. Phase 2: DNN Mapping. 3: Generate the benchmark factor matrix Ub for CDR or Vb for CSR. 4: Normalize {Ut, Ub} for CDR or {Vt, Vb} for CSR. 5: Train the parameters of the deep neural network by the feedforward and backpropagation processes. 6: Obtain the affine factor matrices Ût or V̂t. 7: Denormalize Ût or V̂t. Phase 3: Cross-Domain Recommendation and Cross-System Recommendation. 8: For CDR, fix Ût and train V̂t from Rt by using the MF model in Phase 1. 9: For CSR, fix V̂t and train Ût from Rt by using the MF model in Phase 1. 10: Obtain the predicted ratings R̂t = Ût[V̂t]⊤ for the target domain or system. 11: return vi. (A NumPy sketch of the three phases follows the table.) |
| Open Source Code | No | The paper does not provide concrete access to source code (specific repository link, explicit code release statement, or code in supplementary materials) for the methodology described in this paper. |
| Open Datasets | Yes | Datasets: In the experiments, we use three real-world datasets, namely two public benchmark datasets, Netflix Prize (https://www.kaggle.com/netflix-inc/netflix-prize-data) and MovieLens 20M (https://www.kaggle.com/grouplens/movielens-20m-dataset), and a Douban dataset crawled from the Douban website. |
| Dataset Splits | Yes | In our experiments, we split each dataset into a training set (80%) with the early ratings and a test set (20%) with the later ratings. (A sketch of this chronological split follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4) needed to replicate the experiment. |
| Experiment Setup | Yes | Parameter Setting: We set the dimension K of the latent factors to 10, 20, 50, and 100, respectively. In order to generate the benchmark factors, we set k = 5 for the top-k similar items or users. For the deep neural network, we set the depth of the hidden layers d to 5, because when d > 5 the performance of our methods almost does not change. We set the dimension of the input and output of the DNN to K, and the number of hidden nodes to 1.5K. We randomly initialize the parameters as suggested in [Glorot and Bengio, 2010], i.e., W ∼ U[−1/(2K), 1/(2K)]. In addition, we set the batch size to 32 and the learning rate to 0.005. (A PyTorch sketch of this mapping network follows the table.) |
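
The following is a minimal NumPy sketch of the three phases of Algorithm 1 for the CDR case, not the authors' code: matrix factorization is plain SGD on observed entries, the benchmark user factors Ub cover only users common to both domains (one simplified reading of the paper's weighted construction; the top-k fallback for other users is omitted), and Phase 2 is stood in for by a linear least-squares map so the script runs end to end (the DNN itself is sketched in the next block). The function name `mf` and the toy rating matrices are assumptions.

```python
import numpy as np

def mf(R, K=10, epochs=20, lr=0.005, reg=0.01, fixed_U=None):
    """Phase 1: factorize R ~= U @ V.T on the observed (non-zero) entries by SGD.
    If fixed_U is given, only the item factors V are updated (used in Phase 3)."""
    n_users, n_items = R.shape
    U = np.random.normal(scale=0.1, size=(n_users, K)) if fixed_U is None else fixed_U
    V = np.random.normal(scale=0.1, size=(n_items, K))
    rows, cols = R.nonzero()
    for _ in range(epochs):
        for u, i in zip(rows, cols):
            err = R[u, i] - U[u] @ V[i]
            if fixed_U is None:
                U[u] += lr * (err * V[i] - reg * U[u])
            V[i] += lr * (err * U[u] - reg * V[i])
    return U, V

# Toy rating matrices for a CDR setting with a shared user set (assumption).
rng = np.random.default_rng(0)
Rs = rng.integers(0, 6, size=(100, 120)).astype(float)  # source-domain ratings
Rt = rng.integers(0, 6, size=(100, 80)).astype(float)   # target-domain ratings
K = 10

# Phase 1: MF modelling of the source and target rating matrices.
Us, Vs = mf(Rs, K)
Ut, Vt = mf(Rt, K)

# Benchmark user factors Ub: mix the target and source factors of common users,
# weighted by how many ratings each domain contributes (simplified reading).
ns = (Rs > 0).sum(axis=1)
nt = (Rt > 0).sum(axis=1)
w = nt / np.maximum(ns + nt, 1)
Ub = w[:, None] * Ut + (1 - w)[:, None] * Us

# Phase 2 stand-in: the paper maps Ut towards Ub with a DNN (next sketch);
# a linear least-squares map keeps this script runnable on its own.
W, *_ = np.linalg.lstsq(Ut, Ub, rcond=None)
Ut_hat = Ut @ W

# Phase 3: fix the mapped user factors, retrain item factors on Rt, predict.
_, Vt_hat = mf(Rt, K, fixed_U=Ut_hat)
Rt_hat = Ut_hat @ Vt_hat.T
print(Rt_hat.shape)
```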
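Below is a hedged PyTorch sketch of the Phase 2 mapping network using the settings quoted in the Experiment Setup row: hidden depth d = 5, hidden width 1.5K, input and output dimension K, Glorot-style initialization, batch size 32, and learning rate 0.005. The activation function, the optimizer (Adam), the min-max normalization scheme, and the names `build_mapper` and `map_factors` are assumptions not fixed by the paper.

```python
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

def build_mapper(K, depth=5):
    """MLP mapping a K-dim target factor to its K-dim benchmark factor:
    `depth` hidden layers of width 1.5K with Glorot-uniform initialized weights."""
    hidden = int(1.5 * K)
    dims = [K] + [hidden] * depth + [K]
    layers = []
    for i in range(len(dims) - 1):
        linear = nn.Linear(dims[i], dims[i + 1])
        nn.init.xavier_uniform_(linear.weight)  # PyTorch's Glorot-uniform; the paper's exact range may differ
        nn.init.zeros_(linear.bias)
        layers.append(linear)
        if i < len(dims) - 2:
            layers.append(nn.Sigmoid())         # activation choice is an assumption
    return nn.Sequential(*layers)

def map_factors(Ut, Ub, epochs=100, batch_size=32, lr=0.005):
    """Phase 2: normalize {Ut, Ub}, train the mapper on (Ut -> Ub) pairs,
    then denormalize the mapped factors (min-max normalization is assumed)."""
    Ut_t = torch.as_tensor(Ut, dtype=torch.float32)
    Ub_t = torch.as_tensor(Ub, dtype=torch.float32)
    lo = min(Ut_t.min(), Ub_t.min())
    hi = max(Ut_t.max(), Ub_t.max())
    norm = lambda X: (X - lo) / (hi - lo + 1e-8)
    model = build_mapper(Ut_t.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)  # optimizer choice is an assumption
    loss_fn = nn.MSELoss()
    loader = DataLoader(TensorDataset(norm(Ut_t), norm(Ub_t)),
                        batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    with torch.no_grad():
        Ut_hat = model(norm(Ut_t)) * (hi - lo + 1e-8) + lo  # denormalize
    return Ut_hat.numpy()

# Example: map 100 toy 10-dim target factors towards benchmark factors.
Ut_hat = map_factors(np.random.randn(100, 10), np.random.randn(100, 10))
```

In the NumPy sketch above, the least-squares stand-in could be replaced by this `map_factors` to approximate the full Phase 1 to Phase 3 pipeline.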
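Finally, a short pandas sketch of the chronological 80/20 split quoted in the Dataset Splits row: the earliest 80% of ratings form the training set and the latest 20% the test set. The file name `rating.csv` and the column name `timestamp` follow the Kaggle MovieLens 20M release and are assumptions for the other datasets.

```python
import pandas as pd

# MovieLens 20M ratings from the Kaggle release (file and column names assumed).
ratings = pd.read_csv("rating.csv")
ratings = ratings.sort_values("timestamp")  # order ratings chronologically
cut = int(len(ratings) * 0.8)
train = ratings.iloc[:cut]                  # earliest 80% of ratings
test = ratings.iloc[cut:]                   # latest 20% of ratings
print(len(train), len(test))
```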