Learning Personalized Itemset Mapping for Cross-Domain Recommendation

Authors: Yinan Zhang, Yong Liu, Peng Han, Chunyan Miao, Lizhen Cui, Baoli Li, Haihong Tang

IJCAI 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We have performed extensive experiments on real datasets to demonstrate the effectiveness of the proposed model, comparing it with existing single-domain and cross-domain recommendation methods."
Researcher Affiliation | Collaboration | Yinan Zhang (Alibaba-NTU Singapore Joint Research Institute; School of Computer Science and Engineering, Nanyang Technological University), Yong Liu (Alibaba-NTU Singapore Joint Research Institute; Joint NTU-UBC Research Centre of Excellence in Active Living for the Elderly (LILY)), Peng Han (King Abdullah University of Science and Technology; Alibaba Group), Chunyan Miao (School of Computer Science and Engineering, Nanyang Technological University), Lizhen Cui (School of Software & Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University), Baoli Li (Alibaba Group), Haihong Tang (Alibaba Group)
Pseudocode | Yes | Algorithm 1: Training Algorithm of the CGN Model
Open Source Code | No | The paper does not explicitly state that the source code is open-sourced, nor does it provide a link to a code repository.
Open Datasets | Yes | The experiments are performed on the Amazon review dataset [He and McAuley, 2016].
Dataset Splits | Yes | For each user, in each domain, the last data partition is used for testing and the remaining partitions for model training; the last partition of the training data can serve as validation data for choosing hyper-parameters.
Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., GPU/CPU models or memory).
Software Dependencies | No | The paper mentions methods such as BPRMF and the Adam optimization algorithm, but does not name software libraries or their version numbers.
Experiment Setup | Yes | The learning rate θ is set to 0.0001. The Gaussian kernel width τ is empirically set to 2, the dimensionality of embedding d to 10, the regularization parameter λ to 0.5, and the training batch size B to 64.
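The dataset-split protocol quoted above (per-user, per-domain, last partition held out for testing, with the last training partition reusable for validation) can be sketched as follows. This is a minimal illustration, not the authors' code; the number of partitions per user is a hypothetical choice, since the paper excerpt does not state it.

```python
from collections import defaultdict

def leave_last_partition_split(interactions, n_partitions=5):
    """Per-user split following the paper's description: the last
    partition is held out for testing, the second-to-last (the last
    partition of the training data) serves as validation, and the
    rest form the training set.

    `interactions` maps user -> time-ordered list of items.
    `n_partitions` is a hypothetical parameter; the paper does not
    specify how many partitions each user's data is divided into.
    """
    train = defaultdict(list)
    valid = defaultdict(list)
    test = defaultdict(list)
    for user, items in interactions.items():
        size = max(1, len(items) // n_partitions)
        partitions = [items[i:i + size] for i in range(0, len(items), size)]
        test[user] = partitions[-1]
        valid[user] = partitions[-2] if len(partitions) > 1 else []
        for part in partitions[:-2]:
            train[user].extend(part)
    return train, valid, test
```

In a cross-domain setting this split would be applied independently to each domain's interaction log, so every user contributes training and test data in both domains.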
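The hyper-parameters reported in the experiment setup can be collected into a single configuration sketch. The values below are those stated in the paper; the kernel function itself is an assumption (a standard Gaussian/RBF kernel), since the excerpt only reports the width τ.

```python
import math

# Hyper-parameter values reported in the paper's experiment setup.
CONFIG = {
    "learning_rate": 1e-4,    # step size for Adam (denoted θ in the paper)
    "kernel_width_tau": 2.0,  # Gaussian kernel width τ
    "embedding_dim": 10,      # dimensionality of embedding d
    "reg_lambda": 0.5,        # regularization parameter λ
    "batch_size": 64,         # training batch size B
}

def gaussian_kernel(x, y, tau=CONFIG["kernel_width_tau"]):
    """Standard Gaussian (RBF) kernel with width tau.

    Note: this exact functional form is an assumption for illustration;
    the paper excerpt only specifies the kernel width τ = 2.
    """
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * tau ** 2))
```

With these settings, training would typically loop over mini-batches of 64 examples and update 10-dimensional embeddings with Adam at a learning rate of 0.0001, adding the λ-weighted regularization term to the loss.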