Session-Based Recommendation with Graph Neural Networks

Authors: Shu Wu, Yuyuan Tang, Yanqiao Zhu, Liang Wang, Xing Xie, Tieniu Tan

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments conducted on two real datasets show that SR-GNN evidently outperforms the state-of-the-art session-based recommendation methods consistently. Section 4 (Experiments and Analysis): In this section, we first describe the datasets, compared methods, and evaluation metrics used in the experiments. Then, we compare the proposed SR-GNN with other comparative methods. Finally, we make a detailed analysis of SR-GNN under different experimental settings.
Researcher Affiliation | Collaboration | Shu Wu (1,2), Yuyuan Tang (3), Yanqiao Zhu (4), Liang Wang (1,2), Xing Xie (5), Tieniu Tan (1,2). 1 Center for Research on Intelligent Perception and Computing, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences; 2 University of Chinese Academy of Sciences; 3 School of Computer and Communication Engineering, University of Science and Technology Beijing; 4 School of Software Engineering, Tongji University; 5 Microsoft Research Asia. Emails: shu.wu@nlpr.ia.ac.cn, tangyyuanr@gmail.com, sxkdz@tongji.edu.cn, wangliang@nlpr.ia.ac.cn, xing.xie@microsoft.com, tnt@nlpr.ia.ac.cn
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | To make our results fully reproducible, all source codes have been made public at https://github.com/CRIPAC-DIG/SR-GNN.
Open Datasets | Yes | We evaluate the proposed method on two real-world representative datasets, i.e., Yoochoose (footnote 2) and Diginetica (footnote 3). The Yoochoose dataset is obtained from the RecSys Challenge 2015 and contains a stream of user clicks on an e-commerce website within 6 months. The Diginetica dataset comes from CIKM Cup 2016, where only its transactional data is used. Footnotes: 2 http://2015.recsyschallenge.com/challege.html; 3 http://cikm2016.cs.iupui.edu/cikm-cup
Dataset Splits | Yes | To be specific, we set the sessions of subsequent days as the test set for Yoochoose, and the sessions of subsequent weeks as the test set for Diginetica. For example, for an input session s = [v_{s,1}, v_{s,2}, ..., v_{s,n}], we generate a series of sequences and labels ([v_{s,1}], v_{s,2}), ([v_{s,1}, v_{s,2}], v_{s,3}), ..., ([v_{s,1}, v_{s,2}, ..., v_{s,n-1}], v_{s,n}), where [v_{s,1}, v_{s,2}, ..., v_{s,n-1}] is the generated sequence and v_{s,n} denotes the next-clicked item, i.e. the label of the sequence. Following (Li et al. 2017a; Liu et al. 2018), we also use the most recent fractions 1/64 and 1/4 of the training sequences of Yoochoose. The statistics of the datasets are summarized in Table 1. Besides, we select other hyper-parameters on a validation set, which is a random 10% subset of the training set.
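As a concrete illustration of the prefix/label augmentation described in the quoted split procedure, the Python sketch below turns one session into (sequence, label) pairs. The function name and the example item IDs are illustrative assumptions, not taken from the paper or its released code.

```python
def augment_session(session):
    """Turn one session (a list of clicked item IDs) into (sequence, label) pairs,
    where each prefix [v_{s,1}, ..., v_{s,i}] is labeled with the next click v_{s,i+1}."""
    sequences, labels = [], []
    for i in range(1, len(session)):
        sequences.append(session[:i])  # generated input sequence
        labels.append(session[i])      # next-clicked item, i.e. the label
    return sequences, labels

# A 4-click session yields 3 training examples:
seqs, labs = augment_session([101, 102, 103, 104])
# seqs == [[101], [101, 102], [101, 102, 103]]
# labs == [102, 103, 104]
```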
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU or GPU models) used for running its experiments.
Software Dependencies | No | The paper does not specify software versions for any libraries, frameworks, or languages used (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | Following previous methods (Li et al. 2017a; Liu et al. 2018), we set the dimensionality of latent vectors d = 100 for both datasets. All parameters are initialized using a Gaussian distribution with a mean of 0 and a standard deviation of 0.1. The mini-batch Adam optimizer is used to optimize these parameters, where the initial learning rate is set to 0.001 and decays by 0.1 after every 3 epochs. Moreover, the batch size and the L2 penalty are set to 100 and 10^-5, respectively.
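A minimal PyTorch sketch of this training configuration is given below. The paper does not state which framework was used, so the Embedding stand-in, the item count, and the scheduler choice are assumptions; only the numeric hyper-parameter values come from the quoted setup.

```python
import torch

d = 100            # dimensionality of latent vectors (both datasets)
num_items = 40000  # placeholder; the actual item count comes from the dataset

# Stand-in parameterized module; the real SR-GNN model (gated GNN + attention)
# would be constructed here instead.
model = torch.nn.Embedding(num_embeddings=num_items, embedding_dim=d)

# All parameters initialized from a Gaussian with mean 0 and standard deviation 0.1
for p in model.parameters():
    torch.nn.init.normal_(p, mean=0.0, std=0.1)

# Mini-batch Adam with initial learning rate 0.001 and L2 penalty (weight decay) 1e-5
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-5)

# Learning rate decays by a factor of 0.1 after every 3 epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=3, gamma=0.1)

batch_size = 100
```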