Graph Disentangled Contrastive Learning with Personalized Transfer for Cross-Domain Recommendation
Authors: Jing Liu, Lele Sun, Weizhi Nie, Peiguang Jing, Yuting Su
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on four real-world datasets demonstrate the superiority of GDCCDR over state-of-the-art methods. |
| Researcher Affiliation | Academia | Jing Liu, Lele Sun, Weizhi Nie, Peiguang Jing, Yuting Su* School of Electrical and Information Engineering, Tianjin University, China {jliu_tju, sunlele, weizhinie, pgjing, ytsu}@tju.edu.cn |
| Pseudocode | No | The paper describes its methods using mathematical formulations and textual descriptions but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | We evaluate GDCCDR on the Amazon dataset (http://jmcauley.ucsd.edu/data/amazon/index_2014.html), specifically Sport&Phone, Sport&Cloth, Elec&Phone, and Elec&Cloth. |
| Dataset Splits | No | The paper mentions training and test sets and a leave-one-out strategy, but it does not explicitly provide percentages or counts for distinct training, validation, and test splits. (A sketch of the leave-one-out protocol appears below the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions a "PyTorch implementation" but does not provide specific version numbers for software dependencies such as PyTorch or Python. |
| Experiment Setup | Yes | The embedding dimension (d) is set to 128 for all methods, with a fixed learning rate of 0.001, a batch size of 1024, and a dropout rate of 0.5. The low-rank dimension (k) is 10, the proximate temperature (τp) is 0.05, and the L2 regularization coefficient (λl) is selected from {0.05, 0.005, 0.0005}. The final embeddings of GNN-based methods are obtained through mean pooling. For the point-wise loss, four negative samples are drawn per positive sample. (A configuration sketch appears below the table.) |
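
The Dataset Splits row reports only a leave-one-out strategy, with no split percentages or counts. The sketch below shows the standard protocol that phrase usually denotes in recommendation papers: each user's final interaction is held out for testing and the rest is used for training. The function and variable names (`leave_one_out_split`, `interactions`) are our own illustrative assumptions, not code from the paper.

```python
from collections import defaultdict

def leave_one_out_split(interactions):
    """Hold out each user's final interaction for testing.

    `interactions` is a list of (user, item, timestamp) tuples;
    this layout is an assumption for illustration, not the paper's format.
    """
    by_user = defaultdict(list)
    for user, item, ts in interactions:
        by_user[user].append((ts, item))

    train, test = [], []
    for user, events in by_user.items():
        events.sort()  # chronological order
        for ts, item in events[:-1]:
            train.append((user, item))       # all but the last interaction
        test.append((user, events[-1][1]))   # last interaction held out
    return train, test
```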
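The Experiment Setup row fixes the shared hyperparameters. As a minimal sketch, the snippet below collects them into a single configuration and illustrates two details the row mentions: drawing four negatives per positive for the point-wise loss, and mean-pooling per-layer GNN embeddings into the final representation. All identifiers (`CONFIG`, `sample_negatives`, `mean_pool_layers`) are illustrative assumptions, not code released with the paper.

```python
import random
import torch

# Hyperparameter values as reported in the paper's setup; the dict and
# its key names are our own illustrative choices.
CONFIG = {
    "embedding_dim": 128,                  # d
    "learning_rate": 1e-3,
    "batch_size": 1024,
    "dropout": 0.5,
    "low_rank": 10,                        # k
    "proximate_temperature": 0.05,         # tau_p
    "l2_reg_grid": [0.05, 0.005, 0.0005],  # lambda_l, tuned per dataset
    "num_negatives": 4,                    # negatives per positive sample
}

def sample_negatives(user, all_items, seen):
    """Uniformly draw items the user has not interacted with.

    `seen` maps each user to the set of items they interacted with;
    assumes the user has not consumed (nearly) the whole catalog.
    """
    negatives = []
    while len(negatives) < CONFIG["num_negatives"]:
        item = random.choice(all_items)
        if item not in seen[user]:
            negatives.append(item)
    return negatives

def mean_pool_layers(layer_embeddings):
    """Mean-pool a list of per-layer [n, d] GNN embedding tensors into
    the final representation, as the setup describes for GNN-based methods."""
    return torch.stack(layer_embeddings, dim=0).mean(dim=0)
```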