Self-supervised Graph Disentangled Networks for Review-based Recommendation

Authors: Yuyang Ren, Haonan Zhang, Qi Li, Luoyi Fu, Xinbing Wang, Chenghu Zhou

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical results over five benchmark datasets validate the superiority of SGDN over the state-of-the-art methods and the interpretability of learned intent factors."
Researcher Affiliation | Academia | Shanghai Jiao Tong University; Institute of Geographical Sciences and Natural Resources Research, Chinese Academy of Sciences. {renyuyang, zhanghaonan, liqilcn, yiluofu, xwang8}@sjtu.edu.cn, zhouchsjtu@gmail.com
Pseudocode | No | The paper describes the model and algorithms using equations and textual descriptions but does not include a formally labeled "Pseudocode" or "Algorithm" block.
Open Source Code | No | The paper does not provide an explicit statement or a link to open-source code for the described methodology.
Open Datasets | Yes | "Following [Shuai et al., 2022], we evaluate SGDN on the Amazon review dataset [He and McAuley, 2016]."
Dataset Splits | Yes | "Each dataset is randomly split into training, validation, and test sets with a ratio of 8:1:1."
Hardware Specification | No | The paper does not specify the hardware used for running the experiments (e.g., specific GPU/CPU models, memory details).
Software Dependencies | No | The paper mentions software components like "Adam" for optimization and "BERT-Whitening" for encoding reviews, but it does not specify version numbers for these or other software dependencies.
Experiment Setup | Yes | "The hyperparameters for the baseline models are tuned according to the original paper. It is notable that we reimplement DGCF by replacing the BPR loss [Rendle et al., 2012] with MSE loss to accommodate the rating prediction task. For SGDN, we use Adam to optimize the parameters with a learning rate of 0.01. The size of embeddings d for users/items and reviews is set as 64. We choose the number of message passing layers L from {1, 2, 3}, the number of latent factors from {2, 4, 8}, and the dropout ratio from {0.7, 0.8, 0.9}. The temperature hyperparameter τ is tuned from {0.2, 0.5, 1}. The hyperparameter λ is searched from {0.01, 0.05, 0.1, 0.5}."
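
For the 8:1:1 split quoted in the Dataset Splits row, a minimal sketch of one plausible implementation in Python follows; the function name, record format, and seed are illustrative assumptions, not the authors' released code:

```python
import numpy as np

def split_interactions(records, seed=0):
    """Shuffle (user, item, rating, review) records and split them 8:1:1."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(records))
    n_train = int(0.8 * len(records))
    n_val = int(0.1 * len(records))
    train = [records[i] for i in idx[:n_train]]
    val = [records[i] for i in idx[n_train:n_train + n_val]]
    test = [records[i] for i in idx[n_train + n_val:]]
    return train, val, test
```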
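The Experiment Setup row notes that DGCF was re-implemented with MSE loss in place of BPR loss to accommodate rating prediction. A PyTorch-style sketch of the contrast between the two objectives (function names are illustrative; this is not the authors' code):

```python
import torch.nn.functional as F

def bpr_loss(pos_scores, neg_scores):
    # Pairwise ranking objective: score observed items above sampled negatives.
    return -F.logsigmoid(pos_scores - neg_scores).mean()

def rating_loss(predicted, observed):
    # Pointwise MSE objective: fit the observed rating values directly,
    # which matches the rating prediction task described in the paper.
    return F.mse_loss(predicted, observed)
```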
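The hyperparameter search described in the same row can be restated as a plain grid. The values below are exactly those reported; the dictionary keys and the exhaustive product are illustrative assumptions about how the search was organized:

```python
import itertools

# Fixed settings reported for SGDN.
FIXED = {"optimizer": "Adam", "learning_rate": 0.01, "embedding_dim": 64}

# Search ranges quoted in the paper; the key names are illustrative.
SEARCH_SPACE = {
    "num_layers_L": [1, 2, 3],          # message-passing layers
    "num_latent_factors": [2, 4, 8],    # intent factors
    "dropout_ratio": [0.7, 0.8, 0.9],
    "temperature_tau": [0.2, 0.5, 1],
    "lambda": [0.01, 0.05, 0.1, 0.5],
}

configs = [dict(zip(SEARCH_SPACE, values))
           for values in itertools.product(*SEARCH_SPACE.values())]
print(len(configs))  # 3 * 3 * 3 * 3 * 4 = 324 candidate configurations
```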