Temporal Graph Contrastive Learning for Sequential Recommendation

Authors: Shengzhe Zhang, Liyi Chen, Chao Wang, Shuangli Li, Hui Xiong

AAAI 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments on several real-world datasets show the effectiveness of TGCL4SR against state-of-the-art baselines of sequential recommendation.
Researcher Affiliation Academia (1) University of Science and Technology of China; (2) Guangzhou HKUST Fok Ying Tung Research Institute; (3) The Thrust of Artificial Intelligence, The Hong Kong University of Science and Technology (Guangzhou); (4) The Department of Computer Science and Engineering, The Hong Kong University of Science and Technology
Pseudocode No The paper includes equations and describes algorithms in text, but no explicit pseudocode or algorithm blocks are provided.
Open Source Code No The paper does not provide any statement about releasing the source code or a link to a code repository.
Open Datasets Yes Four public datasets are chosen for the evaluation of SR models from Amazon Review (He and McAuley 2016) and Goodreads (Wan et al. 2019). Amazon Review collects user reviews on products in various categories from Amazon. We choose the datasets of three categories, Beauty, Video Games, and CDs, for experiments. In addition, Goodreads Review contains user reviews on books of various genres from Goodreads. We take the dataset of Comics Graphic for evaluation.
Dataset Splits Yes The ranking of predictions is computed on the full item set rather than on a sampled subset. ... For the interaction sequence S^u = {v^u_1, v^u_2, ..., v^u_{|S^u|}} of each user u, we take the subsequence S^u_{k-1} = {v^u_1, v^u_2, ..., v^u_{k-1}} and its corresponding target item v^u_k as training data at each time step k from 2 to |S^u|. Then for all users, we adopt the cross-entropy loss function to optimize the model. ... We adopt the leave-one-out evaluation strategy.
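The training-pair construction and leave-one-out split quoted above can be sketched as follows. This is a minimal illustration, not the authors' code; the leave-one-out helper assumes the standard sequential-recommendation convention (last item for testing, second-to-last for validation), which the quoted text does not spell out.

```python
def make_training_pairs(seq):
    """For a sequence S^u = [v_1, ..., v_n], yield one training example
    (prefix S^u_{k-1}, target v_k) for each time step k from 2 to n."""
    return [(seq[:k - 1], seq[k - 1]) for k in range(2, len(seq) + 1)]

def leave_one_out_split(seq):
    """Standard leave-one-out split: last item held out for testing,
    second-to-last for validation, the rest used for training."""
    return seq[:-2], seq[-2], seq[-1]

# Example with item IDs 10..40:
pairs = make_training_pairs([10, 20, 30, 40])
# -> [([10], 20), ([10, 20], 30), ([10, 20, 30], 40)]
train, valid, test = leave_one_out_split([10, 20, 30, 40])
# -> ([10, 20], 30, 40)
```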
Hardware Specification No The experiments are conducted on a server with fifteen vCPUs of an AMD EPYC 7543 32-Core Processor and one NVIDIA A40 GPU. While it mentions a server and a GPU, it lacks specific details like GPU memory, CPU clock speed, or total RAM.
Software Dependencies No Our work is implemented in PyTorch. This mentions the software 'PyTorch' but does not specify a version number.
Experiment Setup Yes The sampling parameters M and N are set to 2 and 20, respectively. We set the training batch size and all the embedding dimension sizes to 1024 and 64, respectively. The max length of user sequences is limited to 50. We set both the number of self-attention blocks and multi-heads for TiTConv and the temporal sequence encoder to 2. The scaling constant a is searched in {50, 100, 200, 400} and c is set to 60000. Next, we set p as 0.5, and tune σ within [0.01, 1]. For TGCL, we tune τ and τ within [0.1, 1]. Last, we search λ1 within [0.25, 1.5] stepping by 0.25, while λ2 is selected from {0.05, 0.1, 0.2, 0.3, 0.5}.
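For reference, the hyperparameter settings and search ranges quoted above can be collected into a single grid. This is a hedged sketch only: the key names are illustrative, not the authors' variable names, and the continuous ranges for σ and the temperatures are shown as intervals the paper tunes within rather than as concrete grids.

```python
# Illustrative hyperparameter grid assembled from the quoted setup;
# single-element lists are fixed values, longer lists are search grids.
search_space = {
    "M": [2],                        # sampling parameter (fixed)
    "N": [20],                       # sampling parameter (fixed)
    "batch_size": [1024],
    "embed_dim": [64],
    "max_seq_len": [50],
    "num_blocks": [2],               # self-attention blocks
    "num_heads": [2],                # attention heads
    "a": [50, 100, 200, 400],        # scaling constant, grid-searched
    "c": [60000],
    "p": [0.5],
    "sigma_range": (0.01, 1.0),      # tuned within this interval
    "tau_range": (0.1, 1.0),         # contrastive temperatures, tuned within
    "lambda1": [0.25 + 0.25 * i for i in range(6)],   # 0.25 to 1.5, step 0.25
    "lambda2": [0.05, 0.1, 0.2, 0.3, 0.5],
}
```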