Self-supervised Graph Neural Networks for Multi-behavior Recommendation

Authors: Shuyun Gu, Xiao Wang, Chuan Shi, Ding Xiao

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments, in comparison with state-of-the-arts, well demonstrate the effectiveness of S-MBRec, where the maximum improvement can reach to 20%." "In this section, we will test the effectiveness of our proposed model on real-world datasets and compare it with other existing advanced models."
Researcher Affiliation | Academia | "Shuyun Gu, Xiao Wang, Chuan Shi and Ding Xiao. Beijing University of Posts and Telecommunications. gsy793048702@163.com, xiaowang@bupt.edu.cn, shichuan@bupt.edu.cn, dxiao@bupt.edu.cn"
Pseudocode | No | The paper describes the method using mathematical equations and textual descriptions, but no explicit pseudocode or algorithm blocks are provided.
Open Source Code | No | The paper does not provide any concrete access information (e.g., repository link, explicit statement of code release) for the source code of the described methodology.
Open Datasets | Yes | "We verify the effect of our model on three real-world datasets: Beibei [Xia et al., 2021b], Taobao [Xia et al., 2021b] and Yelp [Xia et al., 2021a]." Dataset URLs: https://tianchi.aliyun.com/dataset/dataDetail?dataId=649 and https://www.yelp.com/dataset/download
Dataset Splits | Yes | "In the above three datasets, there are at least five associations of target behavior, in which we randomly choose two associations, one is test data and the other is verification data. The rest are used for training." (A sketch of this per-user split follows the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | "Our S-MBRec model is implemented in Pytorch." No version numbers are provided for PyTorch or any other software dependencies.
Experiment Setup | Yes | "Our S-MBRec model is implemented in Pytorch. The model is optimized by the Adam optimizer with learning rate of 1e-4. The training batch-size is selected from {1024, 2048, 4096, 6114}. The embedding dim is searched from {64, 128, 256, 512}. The task weight parameter λ is searched from {0.05, 0.1, 0.2, 0.5, 1.0}, and L2 regularization coefficient is selected in ranges of {0.05, 0.1, 0.2, 0.5, 1.0}. The temperature coefficient τ is searched in {0.1, 0.2, 0.5, 1.0}." (A configuration sketch follows the table.)
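The dataset-split row describes a per-user leave-two-out protocol: two target-behavior interactions are held out per user, one for testing and one for validation, and the rest are used for training. A minimal sketch of that protocol, assuming interactions are available as (user, item) pairs; the function name, seed, and data layout are illustrative assumptions, not taken from the paper:

```python
import random
from collections import defaultdict

def leave_two_out_split(interactions, seed=0):
    """Per-user split as described in the paper: from each user's
    target-behavior interactions (at least five per user in the three
    datasets), randomly hold out one interaction for testing and one for
    validation; the remainder is used for training."""
    rng = random.Random(seed)
    by_user = defaultdict(list)
    for user, item in interactions:
        by_user[user].append(item)

    train, valid, test = [], [], []
    for user, items in by_user.items():
        rng.shuffle(items)
        test.append((user, items[0]))   # one held-out interaction for testing
        valid.append((user, items[1]))  # one held-out interaction for validation
        train.extend((user, item) for item in items[2:])
    return train, valid, test
```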
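The experiment-setup row lists the reported optimizer and hyperparameter search ranges. A sketch of how that configuration could be wired up in PyTorch; the `SMBRec` model class is a placeholder, and mapping the L2 coefficient to Adam's weight_decay is an assumption, since the paper may instead add the regularization term to the loss:

```python
import itertools
import torch

# Search ranges and fixed learning rate quoted from the paper's setup.
search_space = {
    "batch_size": [1024, 2048, 4096, 6114],
    "embed_dim": [64, 128, 256, 512],
    "task_weight_lambda": [0.05, 0.1, 0.2, 0.5, 1.0],
    "l2_reg": [0.05, 0.1, 0.2, 0.5, 1.0],
    "temperature_tau": [0.1, 0.2, 0.5, 1.0],
}

def make_optimizer(model: torch.nn.Module, l2_reg: float) -> torch.optim.Adam:
    # Adam with the reported learning rate of 1e-4; using weight_decay for
    # the L2 coefficient is an assumption, not confirmed by the paper.
    return torch.optim.Adam(model.parameters(), lr=1e-4, weight_decay=l2_reg)

for values in itertools.product(*search_space.values()):
    config = dict(zip(search_space, values))
    # model = SMBRec(embed_dim=config["embed_dim"])      # hypothetical model class
    # optimizer = make_optimizer(model, config["l2_reg"])
    # train and evaluate with config["batch_size"], lambda, and tau here
    pass
```

In practice only a subset of this grid would be searched; the loop above just enumerates the reported ranges.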