Plug-In Diffusion Model for Sequential Recommendation
Authors: Haokai Ma, Ruobing Xie, Lei Meng, Xin Chen, Xu Zhang, Leyu Lin, Zhanhui Kang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments and analyses on four datasets have verified the superiority of the proposed PDRec over the state-of-the-art baselines and showcased the universality of PDRec as a flexible plugin for commonly-used sequential encoders in different recommendation scenarios. |
| Researcher Affiliation | Collaboration | Haokai Ma1, Ruobing Xie3, Lei Meng2,1*, Xin Chen3, Xu Zhang3, Leyu Lin3, Zhanhui Kang3 1 School of Software, Shandong University, China 2 Shandong Research Institute of Industrial Technology, China 3 Tencent, China |
| Pseudocode | No | The paper does not contain a clearly labeled pseudocode or algorithm block. |
| Open Source Code | Yes | The code is available in https://github.com/hulkima/PDRec. |
| Open Datasets | Yes | We conduct extensive experiments on four real-world datasets. We select Toys and Games and Video Games to form the Toy and Game dataset from Amazon (Lin et al. 2022). From Douban, we pick Books and Music to form the Book and Music dataset (Wu et al. 2023). |
| Dataset Splits | No | The paper does not provide explicit training, validation, and test dataset splits (e.g., percentages or sample counts for each split). |
| Hardware Specification | No | The paper does not specify the hardware (e.g., specific GPU or CPU models) used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers. |
| Experiment Setup | Yes | For fair comparisons, we set the learning rate and the maximum sequence length as 5e-3 and 200. According to the natural distribution of behaviors, we set the ωm as 0.5 for the relatively sparse Amazon datasets and 0.8 for the denser Douban datasets. Similarly, we define the number of coarse-grained sorted items m, the number of fine-grained resorted items n, and the loss weight ωd of LD as 50, 5 and 0.3 for Amazon. For Douban, these parameters are configured as 100, 1, and 0.01, respectively. Due to the variations in TI-DiffRec's confidence range, PDRec exhibits minor discrepancies in the parameters of HBR across diverse datasets. That is, the ranking weight ωr, the truncate value cw and the rescale weight ωf are denoted as 0.1, 3 and 2 for Toy, 0.1, 5 and 4 for Game, 0.3, 3 and 4 for Book and 0.1, 5 and 2 for Music. |
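The per-dataset hyperparameters quoted above can be gathered into a single configuration table for reproduction attempts. The sketch below is illustrative only: the key names (`omega_m`, `omega_d`, `c_w`, etc.) are descriptive stand-ins we chose for the paper's symbols, not identifiers from the PDRec codebase, and the values are exactly those reported in the row above.

```python
# Hyperparameters reported for PDRec, collected per dataset.
# Key-name mapping (our naming, not the official repo's):
#   omega_m -> behavior-distribution weight,
#   m / n   -> coarse-grained sorted / fine-grained resorted item counts,
#   omega_d -> loss weight of L_D,
#   omega_r / c_w / omega_f -> HBR ranking weight, truncate value, rescale weight.

COMMON = {"learning_rate": 5e-3, "max_seq_len": 200}

DATASET_PARAMS = {
    # Amazon datasets (sparser): omega_m = 0.5, m = 50, n = 5, omega_d = 0.3
    "Toy":   {"omega_m": 0.5, "m": 50,  "n": 5, "omega_d": 0.3,
              "omega_r": 0.1, "c_w": 3, "omega_f": 2},
    "Game":  {"omega_m": 0.5, "m": 50,  "n": 5, "omega_d": 0.3,
              "omega_r": 0.1, "c_w": 5, "omega_f": 4},
    # Douban datasets (denser): omega_m = 0.8, m = 100, n = 1, omega_d = 0.01
    "Book":  {"omega_m": 0.8, "m": 100, "n": 1, "omega_d": 0.01,
              "omega_r": 0.3, "c_w": 3, "omega_f": 4},
    "Music": {"omega_m": 0.8, "m": 100, "n": 1, "omega_d": 0.01,
              "omega_r": 0.1, "c_w": 5, "omega_f": 2},
}

def config_for(dataset: str) -> dict:
    """Merge the shared settings with one dataset's hyperparameters."""
    return {**COMMON, **DATASET_PARAMS[dataset]}
```

For example, `config_for("Book")` yields the Douban-Book settings with the shared learning rate and sequence length already merged in.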