Dynamic Item Block and Prediction Enhancing Block for Sequential Recommendation
Authors: Guibing Guo, Shichang Ouyang, Xiaodong He, Fajie Yuan, Xiaohua Liu
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct a series of experiments on four real datasets, and show that even a basic model can be greatly enhanced with the involvement of DIB and PEB in terms of ranking accuracy. |
| Researcher Affiliation | Collaboration | Northeastern University, China; JD AI Research, Beijing, China; Tencent, Shenzhen, China |
| Pseudocode | No | The paper describes methods using natural language and diagrams, but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code and datasets can be obtained from https://github.com/ouououououou/DIB-PEB-Sequential-RS |
| Open Datasets | Yes | We conduct our experiments on four real-world datasets, including three Amazon datasets (http://jmcauley.ucsd.edu/data/amazon/) [He and McAuley, 2016b; McAuley et al., 2015] and MovieLens-100K (https://grouplens.org/datasets/movielens/100k/). |
| Dataset Splits | Yes | For each user, we preserve the last two interactions for the validation and testing sets, while the remaining interactions are used for training. (A minimal split sketch is given below the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for experiments. |
| Software Dependencies | No | The paper mentions 'TensorFlow' and the 'Adam optimizer' but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | For each method, the grid search is applied to find the optimal settings of hyperparameters using the validation set. These include the embedding dimension d from {16, 32, 50, 100, 150} and the learning rate from {0.001, 0.002, 0.005, 0.1, 0.2, 1}. For RUM-I, Caser, MN-DIB, GRU-DIB and GRU4Rec, the sequence length L is from {3, 5, 10, 15, 20}. For MN-DIB and GRU-DIB, the window size of latest similar users is chosen from {3, 5, 10, 15}. To compare each loss function fairly, the sampling number of BPR, TOP1, NCE and PEB is set to 25. (The second sketch below the table illustrates this grid search.) |
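
The leave-last-two-out protocol quoted in the Dataset Splits row can be summarized with a minimal sketch. This is an illustration under assumptions, not the authors' code: it assumes each user's interactions are already ordered chronologically, and the names `leave_last_two_out` and `user_sequences` are hypothetical.

```python
# Hypothetical illustration of the leave-last-two-out split described above.
# Assumes each user's item list is already sorted by interaction time.

def leave_last_two_out(user_sequences):
    """For every user: last item -> test, second-to-last -> validation,
    everything earlier -> training."""
    train, valid, test = {}, {}, {}
    for user, items in user_sequences.items():
        if len(items) < 3:
            # Users with fewer than three interactions cannot fill all three
            # splits; how the paper treats them is not stated, so skip here.
            continue
        train[user] = items[:-2]
        valid[user] = items[-2]
        test[user] = items[-1]
    return train, valid, test


if __name__ == "__main__":
    toy = {"u1": ["i1", "i2", "i3", "i4", "i5"]}
    print(leave_last_two_out(toy))
    # ({'u1': ['i1', 'i2', 'i3']}, {'u1': 'i4'}, {'u1': 'i5'})
```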
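
The grid search quoted in the Experiment Setup row can likewise be sketched as an exhaustive sweep over the stated value grids. `train_and_evaluate` is a placeholder returning a random score, not a function from the authors' repository; in a real run it would train the model with the given hyperparameters and report validation ranking accuracy.

```python
# Hypothetical sketch of the hyperparameter grid search described above.
import itertools
import random

EMBEDDING_DIMS = [16, 32, 50, 100, 150]
LEARNING_RATES = [0.001, 0.002, 0.005, 0.1, 0.2, 1]
SEQUENCE_LENGTHS = [3, 5, 10, 15, 20]   # RUM-I, Caser, MN-DIB, GRU-DIB, GRU4Rec
SIMILAR_USER_WINDOWS = [3, 5, 10, 15]   # MN-DIB and GRU-DIB only
NEGATIVE_SAMPLES = 25                   # fixed for the BPR, TOP1, NCE and PEB losses


def train_and_evaluate(d, lr, seq_len, window):
    """Placeholder: train the model with these hyperparameters and return its
    ranking accuracy on the validation set (a random score is used here)."""
    return random.random()


def grid_search():
    best_score, best_config = float("-inf"), None
    for d, lr, seq_len, window in itertools.product(
        EMBEDDING_DIMS, LEARNING_RATES, SEQUENCE_LENGTHS, SIMILAR_USER_WINDOWS
    ):
        score = train_and_evaluate(d, lr, seq_len, window)
        if score > best_score:
            best_score, best_config = score, (d, lr, seq_len, window)
    return best_config, best_score


if __name__ == "__main__":
    print(grid_search())
```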