Collaborative Self-Attention Network for Session-based Recommendation

Authors: Anjing Luo, Pengpeng Zhao, Yanchi Liu, Fuzhen Zhuang, Deqing Wang, Jiajie Xu, Junhua Fang, Victor S. Sheng

IJCAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on two real-world datasets show that CoSAN constantly outperforms state-of-the-art methods.
Researcher Affiliation | Academia | 1 Institute of AI, School of Computer Science and Technology, Soochow University, China; 2 Rutgers University, New Jersey, USA; 3 Key Lab of IIP of CAS, Institute of Computing Technology, Beijing, China; 4 The University of Chinese Academy of Sciences, Beijing, China; 5 School of Computer, Beihang University, Beijing, China; 6 Texas Tech University, Texas, USA
Pseudocode | No | The paper includes an architecture diagram (Figure 1) but no explicit pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements or links indicating that its source code is open or publicly available.
Open Datasets | Yes | We study the effectiveness of our proposed model CoSAN on two real-world datasets, i.e., Retailrocket (https://www.kaggle.com/retailrocket/ecommerce-dataset) and Yoochoose (http://2015.recsyschallenge.com/challege.html).
Dataset Splits | Yes | We take the sessions of the subsequent day on Yoochoose and the sessions of the subsequent week on Retailrocket for testing. ... Since Yoochoose is quite large, we sorted the training sequences by time and reported our results on more recent fractions 1/64 and 1/4 of the training sequences [Li et al., 2017]. ... Table 1: Statistics of the datasets (includes 'train' and 'test' rows with counts). A hedged sketch of this temporal split appears below the table.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments.
Software Dependencies | No | The paper does not list specific software dependencies with version numbers.
Experiment Setup | Yes | Without a special mention, we set the number of self-attention heads h and self-attention layers r to 1 and 2 respectively. Also, the weighting parameter α is set to 0.5. A hedged configuration sketch appears below the table.
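
The temporal split quoted in the Dataset Splits row (the subsequent day of Yoochoose and the subsequent week of Retailrocket held out for testing, then only the most recent 1/64 or 1/4 fraction of training sessions retained) can be approximated as follows. This is a minimal sketch, not the authors' preprocessing code: the pandas-based approach and the `session_id` / `timestamp` column names are assumptions.

```python
import pandas as pd

def temporal_split(events: pd.DataFrame, test_days: int, train_fraction: float = 1.0):
    """Hold out the most recent `test_days` of sessions for testing
    (the subsequent day on Yoochoose, the subsequent week on Retailrocket),
    then keep only the most recent `train_fraction` of training sessions
    (e.g. 1/64 or 1/4 for Yoochoose).

    `events` is assumed to have `session_id` and `timestamp` (Unix seconds)
    columns; these names are illustrative, not taken from the paper.
    """
    # Timestamp each session by its last event.
    session_end = events.groupby("session_id")["timestamp"].max()
    split_point = session_end.max() - test_days * 86400

    test_ids = session_end[session_end >= split_point].index
    train_end = session_end[session_end < split_point].sort_values()

    # Keep only the most recent fraction of training sessions.
    keep = int(len(train_end) * train_fraction)
    train_ids = train_end.index[len(train_end) - keep:]

    train = events[events["session_id"].isin(train_ids)]
    test = events[events["session_id"].isin(test_ids)]
    return train, test
```

Under these assumptions, `temporal_split(events, test_days=1, train_fraction=1/64)` would correspond to the Yoochoose 1/64 setting and `temporal_split(events, test_days=7)` to the Retailrocket setting.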
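The Experiment Setup row fixes only three hyperparameters: h = 1 self-attention head, r = 2 self-attention layers, and weighting parameter α = 0.5. The sketch below shows how such a stack could be configured; PyTorch, the embedding size, the residual connections, and the module names are assumptions for illustration, not the authors' CoSAN implementation.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted in the paper: h = 1, r = 2, alpha = 0.5.
NUM_HEADS = 1     # h: number of self-attention heads
NUM_LAYERS = 2    # r: number of stacked self-attention layers
ALPHA = 0.5       # weighting parameter; where it is applied is not shown here
EMBED_DIM = 100   # assumed embedding size, not stated in the quoted setup

class SelfAttentionStack(nn.Module):
    """r stacked multi-head self-attention blocks over a session's item
    embeddings. A generic sketch, not the authors' CoSAN implementation."""

    def __init__(self, embed_dim=EMBED_DIM, num_heads=NUM_HEADS, num_layers=NUM_LAYERS):
        super().__init__()
        self.layers = nn.ModuleList(
            nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
            for _ in range(num_layers)
        )

    def forward(self, x):
        # x: (batch, session_length, embed_dim)
        for attn in self.layers:
            out, _ = attn(x, x, x)  # self-attention: queries = keys = values
            x = x + out             # residual connection (assumed)
        return x

# Example: a batch of 32 sessions, each with 10 item embeddings.
stack = SelfAttentionStack()
output = stack(torch.randn(32, 10, EMBED_DIM))
```

The sketch leaves α as a constant only, since the quoted setup gives its value but not where it enters the model.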