Cross-Domain Recommendation: An Embedding and Mapping Approach
Authors: Tong Man, Huawei Shen, Xiaolong Jin, Xueqi Cheng
IJCAI 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on two cross-domain recommendation scenarios demonstrate that EMCDR significantly outperforms state-of-the-art cross-domain recommendation methods. |
| Researcher Affiliation | Academia | CAS Key Laboratory of Network Data Science and Technology, Institute of Computing Technology, Chinese Academy of Sciences, China; University of Chinese Academy of Sciences, China |
| Pseudocode | Yes | Algorithm 1 The EMCDR framework. |
| Open Source Code | No | The paper does not provide any statements about open-sourcing code or links to a code repository. |
| Open Datasets | Yes | Two real-world datasets are adopted for evaluation. The first dataset, MovieLens-Netflix... The second dataset was crawled from an online social network, i.e., Douban [Huang et al., 2012], where users give ratings to books and movies. |
| Dataset Splits | Yes | The learning rate and regularization coefficients are optimized via 5-fold cross validation on the training dataset. |
| Hardware Specification | No | The paper does not specify any hardware used for the experiments. |
| Software Dependencies | No | The paper mentions using a 'tan-sigmoid function' as activation and 'stochastic gradient descent' for optimization, but does not list any specific software libraries or their version numbers. |
| Experiment Setup | Yes | The dimension K of the latent factors is set to 20, 50, and 100. The learning rate and regularization coefficients are optimized via 5-fold cross validation on the training dataset. For the linear mapping function, the size of the mapping matrix is K × K, and the regularization coefficient λM is chosen as 0.01; for the MLP mapping function, the MLP has one hidden layer, the dimension of the input and output of the MLP is set to K, whilst the number of nodes in the hidden layer is set to 2K. The weight and bias parameters of the MLP are initialized according to the rule in [Glorot and Bengio, 2010]. Mini-batches of size 16 are used, with no momentum. Finally, a tan-sigmoid function is employed as the activation function. |
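
To make the quoted setup concrete, below is a minimal NumPy sketch of the mapping step of the EMCDR framework under the reported hyperparameters: K-dimensional latent factors, a one-hidden-layer MLP of width 2K with tanh ("tan-sigmoid") activation, Glorot/Bengio initialization, and plain SGD with mini-batches of 16 and no momentum. This is an illustrative reconstruction, not the authors' code; the function and variable names (`fit_mlp_mapping`, `U_src`, `U_tgt`, `V_tgt`) are assumptions, and the source- and target-domain latent factors are assumed to be pre-trained in step 1 of the framework.

```python
import numpy as np

K = 20           # latent dimension (the paper tries 20, 50, and 100)
HIDDEN = 2 * K   # hidden-layer width, per the reported setup
LR = 0.01        # learning rate (tuned via 5-fold CV in the paper; value assumed here)
BATCH = 16       # mini-batch size reported in the paper
EPOCHS = 100     # training length not reported; assumed for the sketch

rng = np.random.default_rng(0)

def glorot(fan_in, fan_out):
    """Glorot/Bengio (2010) uniform initialization, as cited in the setup."""
    limit = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-limit, limit, size=(fan_in, fan_out))

# One-hidden-layer MLP mapping: K -> 2K -> K, tanh activation.
W1, b1 = glorot(K, HIDDEN), np.zeros(HIDDEN)
W2, b2 = glorot(HIDDEN, K), np.zeros(K)

def forward(X):
    H = np.tanh(X @ W1 + b1)   # hidden activations
    return H, H @ W2 + b2      # mapped target-domain factors

def fit_mlp_mapping(U_src, U_tgt):
    """Fit the mapping on latent factors of users shared by both domains,
    minimizing squared error with mini-batch SGD (no momentum)."""
    n = U_src.shape[0]
    for _ in range(EPOCHS):
        idx = rng.permutation(n)
        for start in range(0, n, BATCH):
            b = idx[start:start + BATCH]
            X, Y = U_src[b], U_tgt[b]
            H, Y_hat = forward(X)
            G = 2.0 * (Y_hat - Y) / len(b)     # gradient of mean squared loss
            dW2, db2 = H.T @ G, G.sum(axis=0)
            GH = (G @ W2.T) * (1.0 - H ** 2)   # backprop through tanh
            dW1, db1 = X.T @ GH, GH.sum(axis=0)
            W1 -= LR * dW1
            b1 -= LR * db1
            W2 -= LR * dW2
            b2 -= LR * db2

# Cold-start recommendation (step 3): map a source-domain user into the
# target latent space, then score against the target item factors V_tgt:
#   scores = forward(u_src[None, :])[1] @ V_tgt.T
```

For the linear variant described in the same quote, the MLP would be replaced by a single K × K matrix M trained on the same squared-error objective plus an L2 penalty weighted by λM = 0.01.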