DARec: Deep Domain Adaptation for Cross-Domain Recommendation via Transferring Rating Patterns
Authors: Feng Yuan, Lina Yao, Boualem Benatallah
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We empirically demonstrate on public datasets that our method achieves the best performance among several state-of-the-art alternative cross-domain recommendation models. In this section, we systematically evaluate the DARec model on multiple subsets extracted from the Amazon dataset with shared users in different categories. |
| Researcher Affiliation | Academia | Feng Yuan, Lina Yao and Boualem Benatallah (University of New South Wales); feng.yuan@student.unsw.edu.au, lina.yao@unsw.edu.au, b.benatallah@unsw.edu.au |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides links and mentions open-source implementations for baseline models (e.g., Surprise, CFRBM, CF-NADE, CMF, libfm) but does not state that the code for DARec, the method proposed in this paper, is open-source or provide a link to it. |
| Open Datasets | Yes | We use the public dataset collected by J. McAuley [He and McAuley, 2016] and define different item categories as domains, where we select users with at least 5 ratings. |
| Dataset Splits | Yes | In the embedding training stage, we leave out 10% of the data as a validation set to tune the hyperparameters. To process the dataset, we randomly leave out 10% (20%) of the data for testing and 90% (80%) for training. 10% of the training set is used as a validation set for hyperparameter tuning. A minimal split sketch appears after the table. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU or CPU models, memory specifications, or types of computing resources used for running the experiments. |
| Software Dependencies | No | The paper mentions "tensorflow version" for some baselines and "python version" for CMF, but it does not provide specific version numbers for TensorFlow, Python, or any other critical software dependencies required to replicate the experiment. |
| Experiment Setup | Yes | In the embedding training stage, we leave out 10% of the data as a validation set to tune the hyperparameters, where we adjust the number of hidden neurons from 100 to 1500 and the regularizer coefficient from 0.1 to 0.00001. For the rating pattern extractor, we use only one layer, with hidden neurons varying from 50 to 500. In the rating predictor, we apply 3 layers for each domain and 2 layers for the domain classifier. The parameter β is varied from 0.0001 to 1, and µ, λ are varied from 0.0001 to 10,000. To train DARec, we apply the mini-batch Adam algorithm. We initialize W1, W2 with a normal distribution (zero mean, 0.01 standard deviation) and b1, b2 with zeros. We initialize the weights in DANN with a normal distribution (zero mean, 0.01 standard deviation), and biases with zeros. An initialization and optimizer sketch appears after the table. |
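
The dataset-splits row above describes a random 90/10 (or 80/20) train/test split, with 10% of the training data then held out for validation. Below is a minimal sketch of that procedure; the DataFrame layout, column names, and file path are assumptions for illustration, not details from the paper.

```python
import numpy as np
import pandas as pd

rng = np.random.default_rng(seed=42)  # fixed seed for a reproducible split

def split_ratings(ratings: pd.DataFrame, test_frac: float = 0.1):
    """Hold out `test_frac` of the ratings for testing, then carve 10%
    of the remaining training data out as a validation set for
    hyperparameter tuning, as described in the row above."""
    idx = rng.permutation(len(ratings))
    n_test = int(test_frac * len(ratings))
    test = ratings.iloc[idx[:n_test]]
    train_full = ratings.iloc[idx[n_test:]]
    n_val = int(0.1 * len(train_full))
    val, train = train_full.iloc[:n_val], train_full.iloc[n_val:]
    return train, val, test

# Usage for the 90/10 split (pass test_frac=0.2 for the 80/20 variant);
# "amazon_ratings.csv" is a hypothetical file name.
# ratings = pd.read_csv("amazon_ratings.csv")
# train, val, test = split_ratings(ratings, test_frac=0.1)
```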
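
The experiment-setup row quotes the initialization and optimizer choices: weights drawn from a normal distribution with zero mean and 0.01 standard deviation, zero biases, and mini-batch Adam. The sketch below shows one way to reproduce those settings; PyTorch, the layer widths, and the learning rate are assumptions, since the paper names neither DARec's framework nor those values.

```python
import torch
import torch.nn as nn

def init_darec_style(module: nn.Module) -> None:
    """Initialize linear layers as quoted above: weights ~ N(0, 0.01),
    biases set to zero."""
    if isinstance(module, nn.Linear):
        nn.init.normal_(module.weight, mean=0.0, std=0.01)
        nn.init.zeros_(module.bias)

# Hypothetical 3-layer rating predictor for one domain, mirroring the
# "3 layers for each domain" setting; the widths are illustrative.
predictor = nn.Sequential(
    nn.Linear(200, 100), nn.ReLU(),
    nn.Linear(100, 50), nn.ReLU(),
    nn.Linear(50, 1),
)
predictor.apply(init_darec_style)

# Mini-batch Adam, as quoted; the learning rate is an assumed value.
optimizer = torch.optim.Adam(predictor.parameters(), lr=1e-3)
```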