Generate What You Prefer: Reshaping Sequential Recommendation via Guided Diffusion

Authors: Zhengyi Yang, Jiancan Wu, Zhicai Wang, Xiang Wang, Yancheng Yuan, Xiangnan He

NeurIPS 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the effectiveness of DreamRec through extensive experiments and comparisons with existing methods.
Researcher Affiliation | Academia | University of Science and Technology of China; The Hong Kong Polytechnic University
Pseudocode | Yes | Algorithm 1: Training phase of DreamRec; Algorithm 2: Generation phase of DreamRec
Open Source Code | Yes | Codes and data are open-sourced at https://github.com/YangZhengyi98/DreamRec.
Open Datasets | Yes | We use three datasets from real-world sequential recommendation scenarios: YooChoose, KuaiRec, and Zhihu (the statistics of the datasets are illustrated in Appendix B). The YooChoose dataset comes from the RecSys Challenge 2015. The KuaiRec [42] dataset is collected from the recommendation logs of a video-sharing mobile app. The Zhihu [43] dataset is collected from a socialized knowledge-sharing community.
Dataset Splits | Yes | For all datasets, we first sort all sequences in chronological order, and then split the data into training, validation and testing data at the ratio of 8:1:1. (A minimal split sketch follows this table.)
Hardware Specification | Yes | We implement all models with Python 3.7 and PyTorch 1.12.1 on an Nvidia GeForce RTX 3090.
Software Dependencies | Yes | We implement all models with Python 3.7 and PyTorch 1.12.1 on an Nvidia GeForce RTX 3090.
Experiment Setup | Yes | The embedding dimension of items is fixed as 64 across all models. The learning rate is tuned in the range of [0.01, 0.005, 0.001, 0.0005, 0.0001, 0.00005]. For our DreamRec, we fix the unconditional training probability p_u as 0.1, as suggested by [36]. We search the total diffusion step T in the range of [50, 100, 200, 500, 1000, 2000], and the personalized guidance strength w in the range of [0, 2, 4, 6, 8, 10].
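
The dataset-splits row above describes a chronological 8:1:1 partition of the interaction data. The following is a minimal sketch of such a split, assuming each sequence carries a single timestamp; the function and variable names are illustrative and not taken from the DreamRec codebase.

```python
# Hedged sketch of a chronological 8:1:1 train/validation/test split.
# Assumes one timestamp per interaction sequence; names are illustrative.

def chronological_split(sequences, timestamps, ratios=(0.8, 0.1, 0.1)):
    """Sort sequences by time, then cut them into train/valid/test blocks."""
    order = sorted(range(len(sequences)), key=lambda i: timestamps[i])
    ordered = [sequences[i] for i in order]

    n = len(ordered)
    n_train = int(n * ratios[0])
    n_valid = int(n * ratios[1])

    train = ordered[:n_train]
    valid = ordered[n_train:n_train + n_valid]
    test = ordered[n_train + n_valid:]
    return train, valid, test


# Toy usage: ten item-id sequences with arbitrary timestamps -> 8/1/1 split.
seqs = [[1, 2, 3], [4, 5], [2, 6], [8, 9], [3, 1], [5, 7], [9, 4], [6, 8], [7, 3], [1, 5]]
times = [3, 1, 7, 2, 9, 5, 4, 8, 6, 10]
train, valid, test = chronological_split(seqs, times)
print(len(train), len(valid), len(test))  # 8 1 1
```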
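
The experiment-setup row fixes the unconditional training probability p_u = 0.1 and searches the total diffusion step T and the personalized guidance strength w. Below is a minimal, hedged sketch of classifier-free guidance in that style, loosely following the training and generation phases named in the pseudocode row: with probability p_u the sequence condition is replaced by a learned null embedding during training, and at generation the conditional and unconditional predictions are combined with strength w. The Denoiser module, null_cond embedding, noise schedule, and toy tensors are illustrative assumptions, not DreamRec's actual implementation.

```python
# Illustrative sketch of classifier-free guidance for diffusion-based recommendation.
# Hyperparameters mirror the experiment-setup row (dim=64, p_u=0.1, T and w searched);
# everything else (architecture, schedule, data) is a stand-in, not the paper's code.
import torch
import torch.nn as nn

class Denoiser(nn.Module):
    """Predict the clean target-item embedding from (noisy x_t, step t, condition)."""
    def __init__(self, dim=64, max_steps=2000):
        super().__init__()
        self.step_emb = nn.Embedding(max_steps, dim)
        self.net = nn.Sequential(nn.Linear(3 * dim, 256), nn.GELU(), nn.Linear(256, dim))

    def forward(self, x_t, t, cond):
        return self.net(torch.cat([x_t, self.step_emb(t), cond], dim=-1))

dim, T, p_u, w = 64, 500, 0.1, 2.0
model = Denoiser(dim)
betas = torch.linspace(1e-4, 0.02, T)
alpha_bar = torch.cumprod(1.0 - betas, dim=0)
null_cond = nn.Parameter(torch.zeros(dim))  # learned "unconditional" embedding

# --- training step (in the spirit of Algorithm 1) ---
x0 = torch.randn(32, dim)        # target-item embeddings (toy stand-in)
cond = torch.randn(32, dim)      # sequence-encoder outputs (toy stand-in)
drop = torch.rand(32, 1) < p_u   # drop the condition with probability p_u
cond_in = torch.where(drop, null_cond.expand(32, dim), cond)
t = torch.randint(0, T, (32,))
noise = torch.randn_like(x0)
a = alpha_bar[t].unsqueeze(-1)
x_t = a.sqrt() * x0 + (1 - a).sqrt() * noise
loss = ((model(x_t, t, cond_in) - x0) ** 2).mean()
loss.backward()

# --- one guided denoising query (in the spirit of Algorithm 2) ---
with torch.no_grad():
    x = torch.randn(32, dim)
    t_last = torch.full((32,), T - 1, dtype=torch.long)
    cond_hat = model(x, t_last, cond)
    uncond_hat = model(x, t_last, null_cond.expand(32, dim))
    x0_hat = (1 + w) * cond_hat - w * uncond_hat  # guided estimate of the item embedding
```

The mixing rule (1 + w) * cond_hat - w * uncond_hat is the standard classifier-free guidance combination; setting w = 0 recovers purely conditional generation, which is consistent with 0 appearing in the searched range [0, 2, 4, 6, 8, 10].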