Intent-aware Recommendation via Disentangled Graph Contrastive Learning
Authors: Yuling Wang, Xiao Wang, Xiangzhou Huang, Yanhua Yu, Haoyang Li, Mengdi Zhang, Zirui Guo, Wei Wu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted on three datasets, which demonstrates the effectiveness of our proposed IDCL. Further analysis shows that the learned intent representations and behavior distributions are interpretable. |
| Researcher Affiliation | Collaboration | 1Beijing University of Posts and Telecommunications 2Beihang University 3Meituan 4Tsinghua University |
| Pseudocode | No | The paper describes the model architecture and components using equations and descriptive text, but it does not include any pseudocode or clearly labeled algorithm blocks. |
| Open Source Code | No | The paper mentions PyTorch as the implementation framework (Footnote 2: "We implement our model based on PyTorch." https://pytorch.org/) and RecBole for baselines (Footnote 3: "We implement all the baselines with the unified open-source library of recommendation algorithms, i.e., RecBole [Zhao et al., 2020]." https://github.com/RUCAIBox/RecBole). However, it does not provide an explicit statement or link to the source code for the IDCL methodology itself. |
| Open Datasets | Yes | We conduct our experiments on three real-world datasets. In detail, for two MovieLens datasets with different scales (i.e., ML-100k, ML-1M) [Harper and Konstan, 2015] |
| Dataset Splits | Yes | We split all users into training/validation/test sets as MultiVAE |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU models, CPU types, memory) used for running the experiments. It only mentions the software framework. |
| Software Dependencies | No | The paper states: "We implement our model based on PyTorch." However, it does not specify the version number of PyTorch or any other software dependencies required to replicate the experiments. |
| Experiment Setup | Yes | We tune the hyper-parameters on the validation set using random search, and the search space of some important hyper-parameters is: K ∈ {6, 8, 10, 12, 14, 16}, d ∈ [20, 40]. ... The Adam optimizer for mini-batch gradient descent is applied to train all models. |
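The reported setup (random search over the number of intents K and the embedding dimension d, selected on the validation set) can be sketched as follows. This is a hypothetical illustration, not the authors' code: `train_and_validate` is a stand-in for training IDCL and returning a validation metric, and the trial count is an assumption since the paper does not state one.

```python
import random

# Search space quoted from the paper.
K_SPACE = [6, 8, 10, 12, 14, 16]   # number of intents K
D_RANGE = (20, 40)                 # embedding dimension d, sampled as an integer

def random_search(train_and_validate, n_trials=20, seed=0):
    """Random search on the validation set; returns (best_score, best_params)."""
    rng = random.Random(seed)
    best = None
    for _ in range(n_trials):
        params = {"K": rng.choice(K_SPACE),
                  "d": rng.randint(*D_RANGE)}
        score = train_and_validate(**params)  # higher is better (e.g., NDCG)
        if best is None or score > best[0]:
            best = (score, params)
    return best
```

The chosen configuration would then be retrained and evaluated on the test split, with Adam used for mini-batch gradient descent as the paper describes.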