Improving Sequential Recommendation Consistency with Self-Supervised Imitation

Authors: Xu Yuan, Hongshen Chen, Yonghao Song, Xiaofang Zhao, Zhuoye Ding

IJCAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on four real-world datasets show that SSI effectively outperforms the state-of-the-art sequential recommendation methods.
Researcher Affiliation | Collaboration | Xu Yuan (1,2,3), Hongshen Chen (3), Yonghao Song (1), Xiaofang Zhao (1), Zhuoye Ding (3); 1: Institute of Computing Technology, Chinese Academy of Sciences, Beijing, China; 2: University of Chinese Academy of Sciences, Beijing, China; 3: JD.com, China
Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any statement or link indicating that the source code for the described methodology is publicly available.
Open Datasets | No | The paper mentions using 'real-world datasets from Amazon review datasets' and selecting four subcategories, but it provides no specific URL, DOI, or formal citation for the exact dataset version used, nor instructions on how to access it.
Dataset Splits | Yes | We hold out the last two interactions as validation and test sets for each user, while the other interactions are used for training. (A sketch of this holdout protocol appears below the table.)
Hardware Specification | No | The paper does not specify any particular hardware components, such as GPU models, CPU types, or memory details, used for running the experiments.
Software Dependencies | No | The paper mentions 'PyTorch' and the 'Adam optimizer' but does not provide specific version numbers for these or other software dependencies required to replicate the experiment.
Experiment Setup | Yes | The hyper-parameters are set as λ1 = λ2 = λ3 = 1. We use the Adam optimizer [Kingma and Ba, 2015] with a learning rate of 0.001, where the batch size is set as 256 in the teacher and student model, respectively. (A sketch of this setup appears below the table.)
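
For concreteness, the holdout protocol quoted in the Dataset Splits row can be sketched as follows. This is a minimal illustration rather than the authors' code: the function name and the dictionary-of-ordered-item-lists input format are assumptions.

    def leave_last_two_out(user_sequences):
        """Per-user holdout: the last interaction becomes the test target,
        the second-to-last the validation target, and the remaining items
        the training sequence, as described in the paper's split protocol."""
        train, valid, test = {}, {}, {}
        for user, items in user_sequences.items():
            if len(items) < 3:
                continue  # too short to carve out validation and test points
            train[user] = items[:-2]
            valid[user] = items[-2]
            test[user] = items[-1]
        return train, valid, test

    # Hypothetical usage with toy data:
    sequences = {"u1": ["i1", "i2", "i3", "i4", "i5"]}
    train, valid, test = leave_last_two_out(sequences)
    print(train["u1"], valid["u1"], test["u1"])  # ['i1', 'i2', 'i3'] i4 i5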
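
The Experiment Setup row can likewise be sketched in PyTorch. Only the reported values (λ1 = λ2 = λ3 = 1, Adam with learning rate 0.001, batch size 256) come from the paper; the placeholder Linear modules, the loss-combination helper, and all names are hypothetical stand-ins for the teacher and student networks and the SSI loss terms.

    import torch

    LAMBDA_1 = LAMBDA_2 = LAMBDA_3 = 1.0  # reported weights, all set to 1
    BATCH_SIZE = 256                      # reported batch size

    teacher = torch.nn.Linear(64, 64)  # placeholder for the teacher network
    student = torch.nn.Linear(64, 64)  # placeholder for the student network

    # One Adam optimizer per model, each with the reported learning rate.
    opt_teacher = torch.optim.Adam(teacher.parameters(), lr=0.001)
    opt_student = torch.optim.Adam(student.parameters(), lr=0.001)

    def total_loss(rec_loss, aux_losses):
        """Weighted sum of the main recommendation loss and three auxiliary
        self-supervised terms; with all lambdas equal to 1 this reduces
        to a plain sum."""
        l1, l2, l3 = aux_losses
        return rec_loss + LAMBDA_1 * l1 + LAMBDA_2 * l2 + LAMBDA_3 * l3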