SeeDRec: Sememe-based Diffusion for Sequential Recommendation

Authors: Haokai Ma, Ruobing Xie, Lei Meng, Yimeng Yang, Xingwu Sun, Zhanhui Kang

IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on nine SR datasets and four cross-domain SR datasets verify its effectiveness and universality. The code is available in https://github.com/hulkima/SeeDRec." "We conduct extensive experiments on nine SR datasets and four CDSR datasets to answer the following questions: RQ1: Does the proposed SeeDRec outperform the base SR models and the SOTA DM-based SR methods? RQ2: How does SeeDRec perform in datasets where explicit item taxonomies (e.g., categories) are absent? RQ3: How does each component proposed in SeeDRec impact the recommendation performance? RQ4: Is our SeeDRec effective and universal enough with different base SR models and cross-domain SR tasks? RQ5: How does SeeDRec function in the interest distribution transfer and the few-shot scenarios?"
Researcher Affiliation | Collaboration | Haokai Ma (1), Ruobing Xie (3), Lei Meng (1,2), Yimeng Yang (1), Xingwu Sun (3), Zhanhui Kang (3). (1) School of Software, Shandong University, China; (2) Shandong Research Institute of Industrial Technology, China; (3) Tencent, China. Emails: mahaokai@mail.sdu.edu.cn, ruobingxie@tencent.com, lmeng@sdu.edu.cn, yyimeng@mail.sdu.edu.cn, sunxingwu01@gmail.com, kegokang@tencent.com
Pseudocode | No | The paper includes figures illustrating the model structure but does not contain a formal pseudocode or algorithm block.
Open Source Code | Yes | "The code is available in https://github.com/hulkima/SeeDRec."
Open Datasets | Yes | "We construct five SR datasets from two platforms (i.e., Amazon [Lin et al., 2022] and PixelRec [Cheng et al., 2023]) with existing categories viewed as sememes."
Dataset Splits | No | The paper describes the prediction task (predicting the target item i_{p+1}), which implies a sequential split. However, it does not provide specific percentages or counts for training, validation, and test splits, nor does it reference predefined standard splits by name. (A hedged sketch of the common leave-one-out protocol appears after this table.)
Hardware Specification | Yes | "All reported results are the average values of five runs with different seeds on the same NVIDIA Tesla V100."
Software Dependencies | No | The paper mentions using the NLTK library but does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | "The learning rate is tuned from 0.001 to 0.05. The batch size and the maximum sequence length are defined as 512 and 200 for fair comparisons. It is imperative to underscore that SeeDRec essentially possesses very few parameters (e.g., k in IPE). We just assign k = 3 via our empirical knowledge. For S2IDM, we define ωmin = 0.1, ωmax = 0.5 and the step T as 10 for all datasets. We use the early-stop strategy to avoid overfitting." (A configuration sketch based on these values appears after this table.)
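
Since the paper only implies a sequential split by predicting the target item i_{p+1}, the sketch below shows the leave-one-out protocol that is standard in sequential recommendation; this is an assumption, not something the paper confirms, and the function name leave_one_out_split is hypothetical.

```python
def leave_one_out_split(seq):
    """Split one user's chronological interactions into train / validation / test.

    Leave-one-out convention (an assumption; the paper does not confirm it):
    the last item is the test target, the second-to-last is the validation
    target, and the remaining prefix is used for training.
    """
    assert len(seq) >= 3, "need at least three interactions per user"
    return seq[:-2], seq[-2], seq[-1]

train_prefix, val_item, test_item = leave_one_out_split(["i1", "i2", "i3", "i4", "i5"])
print(train_prefix, val_item, test_item)  # ['i1', 'i2', 'i3'] i4 i5
```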
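
The sketch below collects the hyperparameters quoted in the Experiment Setup row into a single configuration. The CONFIG dict and the linear interpolation in noise_schedule are illustrative assumptions: the paper states only the bounds ωmin = 0.1, ωmax = 0.5 and the step count T = 10, not the shape of the S2IDM schedule.

```python
import numpy as np

# Values quoted from the paper's experiment setup (key names are hypothetical).
CONFIG = {
    "learning_rate_range": (0.001, 0.05),  # learning rate tuned within this range
    "batch_size": 512,
    "max_seq_len": 200,
    "k_ipe": 3,          # k in IPE, set empirically by the authors
    "omega_min": 0.1,    # S2IDM noise bounds
    "omega_max": 0.5,
    "num_steps_T": 10,   # diffusion steps
}

def noise_schedule(omega_min, omega_max, T):
    """Interpolate T noise levels between the stated bounds.

    The linear shape is an assumption; the paper gives only the endpoints
    and the number of steps.
    """
    return np.linspace(omega_min, omega_max, T)

print(noise_schedule(CONFIG["omega_min"], CONFIG["omega_max"], CONFIG["num_steps_T"]))
```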