Enhancing Sequential Recommendation with Graph Contrastive Learning

Authors: Yixin Zhang, Yong Liu, Yonghui Xu, Hao Xiong, Chenyi Lei, Wei He, Lizhen Cui, Chunyan Miao

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on real-world datasets demonstrate that GCL4SR consistently outperforms state-of-the-art sequential recommendation methods." (Section 5, Experiments: "In this section, we perform extensive experiments to evaluate the performance of the proposed GCL4SR method.")
Researcher Affiliation | Collaboration | (1) School of Software, Shandong University, China; (2) Joint SDU-NTU Centre for Artificial Intelligence Research (C-FAIR), Shandong University, China; (3) Alibaba-NTU Singapore JRI & LILY Research Centre, Nanyang Technological University, Singapore; (4) School of Computer Science and Engineering, Nanyang Technological University, Singapore; (5) Alibaba Group, China
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any statement or link indicating that the source code for the proposed method is publicly available.
Open Datasets | Yes | "The experiments are conducted on the Amazon review dataset [He and McAuley, 2016] and Goodreads review dataset [Wan et al., 2019]. For each user, the last interaction item in her interaction sequence is used as testing data, and the second last item is used as validation data. The remaining items are used as training data."
Dataset Splits | Yes | "For each user, the last interaction item in her interaction sequence is used as testing data, and the second last item is used as validation data. The remaining items are used as training data. We train the model with an early stopping strategy based on the performance on validation data." (A split sketch follows the table.)
Hardware Specification | No | The paper does not specify the hardware used to run the experiments.
Software Dependencies | No | The paper does not state the software libraries or versions required to reproduce the experiments.
Experiment Setup | Yes | "For GCL4SR, we empirically set the number of self-attention blocks and attention heads to 2. The dimensionality of embeddings is set to 64. The weights for the two self-supervised losses λ1 and λ2 are chosen from {0.01, 0.05, 0.1, 0.3, 0.5, 0.7, 1.0}. We use Adam [Kingma and Ba, 2014] as the optimizer and set the learning rate, β1, and β2 to 0.001, 0.9, and 0.999, respectively. Step decay of the learning rate is also adopted. The batch size is chosen from {256, 512, 1024}. The L2 regularization coefficient is set to 5 × 10⁻⁵. We train the model with an early stopping strategy based on the performance on validation data." (A configuration sketch follows the table.)
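
For concreteness, the leave-one-out protocol quoted under Dataset Splits can be written in a few lines. The following Python sketch is illustrative only: the function name and the handling of sequences too short to hold out two items are our assumptions, since no preprocessing code is released.

from typing import Dict, List, Tuple

def leave_one_out_split(
    user_sequences: Dict[int, List[int]]
) -> Tuple[Dict[int, List[int]], Dict[int, int], Dict[int, int]]:
    """Split each user's chronological interaction sequence:
    last item -> test, second-to-last item -> validation, rest -> train."""
    train: Dict[int, List[int]] = {}
    valid: Dict[int, int] = {}
    test: Dict[int, int] = {}
    for user, seq in user_sequences.items():
        if len(seq) < 3:
            # Too short to hold out two items; keep everything for
            # training (our choice, not specified in the paper).
            train[user] = seq
            continue
        train[user] = seq[:-2]  # all but the last two interactions
        valid[user] = seq[-2]   # second-to-last item -> validation
        test[user] = seq[-1]    # last item -> test
    return train, valid, test

train, valid, test = leave_one_out_split({0: [3, 7, 7, 1, 9]})
assert train[0] == [3, 7, 7] and valid[0] == 1 and test[0] == 9

The assertion shows the intended behavior on a toy sequence: the last item goes to test, the second-to-last to validation, and the rest to training.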
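
The Experiment Setup row translates almost directly into an optimizer configuration. The PyTorch sketch below is hedged accordingly: the numeric hyperparameters (embedding dimension, attention blocks and heads, learning rate, Adam betas, L2 coefficient, λ grid) come from the quoted setup, while the stand-in encoder, the step-decay step size and factor, and the training-step helper are placeholders rather than the authors' implementation.

import torch
import torch.nn as nn

# Stand-in for the GCL4SR sequence encoder: 2 self-attention blocks,
# 2 attention heads, embedding dimensionality 64, as in the quoted
# setup. The real model also includes graph-augmented components.
encoder_layer = nn.TransformerEncoderLayer(d_model=64, nhead=2, batch_first=True)
model = nn.TransformerEncoder(encoder_layer, num_layers=2)

# Adam with the reported learning rate, betas, and L2 coefficient.
optimizer = torch.optim.Adam(
    model.parameters(),
    lr=1e-3,
    betas=(0.9, 0.999),
    weight_decay=5e-5,  # L2 regularization coefficient 5 × 10⁻⁵
)

# The paper adopts step decay of the learning rate but does not report
# the step size or decay factor; the values below are placeholders.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=10, gamma=0.5)

# Weights of the two self-supervised losses, tuned on validation data
# over the grid {0.01, 0.05, 0.1, 0.3, 0.5, 0.7, 1.0}.
lambda1, lambda2 = 0.1, 0.1

def training_step(rec_loss, ssl_loss1, ssl_loss2):
    """One optimization step on the joint objective: recommendation loss
    plus the two weighted self-supervised (graph contrastive) losses."""
    loss = rec_loss + lambda1 * ssl_loss1 + lambda2 * ssl_loss2
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

Fixing λ1 = λ2 = 0.1 here is an arbitrary pick from the reported grid; in the paper these weights, like the batch size, are selected via validation performance with early stopping.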