Pattern-enhanced Contrastive Policy Learning Network for Sequential Recommendation
Authors: Xiaohai Tong, Pengfei Wang, Chenliang Li, Long Xia, Shaozhang Niu
IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on four public real-world datasets demonstrate the effectiveness of our approach for sequential recommendation. |
| Researcher Affiliation | Collaboration | (1) Beijing Key Laboratory of Intelligent Telecommunication Software and Multimedia, Beijing University of Posts and Telecommunications, Beijing, China; (2) School of Computer Science, Beijing University of Posts and Telecommunications, Beijing, China; (3) Wuhan University, Wuhan, China; (4) Baidu Inc., Beijing, China |
| Pseudocode | No | The paper includes architectural diagrams (Figure 1) but no pseudocode or formally labeled algorithm blocks. |
| Open Source Code | Yes | The implementation of our model is available at https://github.com/heilsvastika/RAP |
| Open Datasets | Yes | We conduct experiments on four publicly available real-world datasets from three different domains. Beauty and Games are two different categories of the Amazon dataset, LastFM is a music listening dataset released by the Last.fm online music system, of which we take a subset containing interactions from Jan 2015 to June 2015, and MovieLens is a popular benchmark dataset collected from the MovieLens website. In this work, we adopt ML-1m, which has one million user-movie interactions. (Footnotes link to the datasets: http://jmcauley.ucsd.edu/data/amazon/, http://www.cp.jku.at/datasets/LFM-1b/, https://grouplens.org/datasets/movielens/1m/) |
| Dataset Splits | Yes | For partitioning, we split the historical sequence for each user u into three parts: (1) the most recent interaction for testing; (2) the next-to-last interactions for validation; and (3) all remaining interactions for training. (A minimal split sketch appears after the table.) |
| Hardware Specification | No | The paper mentions optimizing models with the Adam optimizer and implementing models in PyTorch, but does not specify any hardware (e.g., CPU or GPU models, memory). |
| Software Dependencies | No | The paper mentions the Adam optimizer, PyTorch, and baseline implementations for NCF, GRU4Rec, Caser, SASRec, BERT4Rec, FPMC, and NARM (via RecBole), but does not specify version numbers for any of these software dependencies. |
| Experiment Setup | Yes | As to RAP, the hidden layer size of GRU and the embedding size are set to 100, the learning rate is set to 0.001, and the batch size is 128. The maximum length of sequential patterns extracted by SPADE is 5 (i.e., k = 5) when setting the support count to 2. (A configuration sketch appears after the table.) |
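
The split described in the Dataset Splits row is the standard leave-one-out protocol. Below is a minimal Python sketch of that partitioning; the function name, the `(user, item, timestamp)` tuple format, and the assumption that each user has at least three interactions are illustrative choices, not taken from the released RAP code.

```python
from collections import defaultdict

def leave_one_out_split(interactions):
    """Split (user, item, timestamp) tuples per the paper's protocol:
    most recent interaction -> test, next-to-last -> validation,
    all remaining -> training. Assumes >= 3 interactions per user."""
    by_user = defaultdict(list)
    for user, item, ts in interactions:
        by_user[user].append((ts, item))

    train, valid, test = {}, {}, {}
    for user, events in by_user.items():
        events.sort(key=lambda e: e[0])   # chronological order
        items = [item for _, item in events]
        train[user] = items[:-2]          # all remaining interactions
        valid[user] = items[-2]           # next-to-last interaction
        test[user]  = items[-1]           # most recent interaction
    return train, valid, test
```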
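
The Experiment Setup row reports concrete hyperparameters. The PyTorch sketch below only wires up a GRU sequence encoder with the stated sizes (embedding size 100, hidden size 100, Adam with learning rate 0.001, batch size 128); the class name and placeholder vocabulary size are hypothetical, and the pattern-mining step (SPADE, max length k = 5, support count 2) and RAP's contrastive policy learning components are omitted.

```python
import torch
import torch.nn as nn

NUM_ITEMS = 10_000  # hypothetical vocabulary size; depends on the dataset

class GRUEncoder(nn.Module):
    """Sequence encoder using the hidden/embedding sizes reported for RAP."""
    def __init__(self, num_items, embed_size=100, hidden_size=100):
        super().__init__()
        self.item_emb = nn.Embedding(num_items, embed_size, padding_idx=0)
        self.gru = nn.GRU(embed_size, hidden_size, batch_first=True)

    def forward(self, item_seq):           # item_seq: (batch, seq_len) item ids
        emb = self.item_emb(item_seq)      # (batch, seq_len, embed_size)
        _, h_n = self.gru(emb)             # h_n: (1, batch, hidden_size)
        return h_n.squeeze(0)              # per-sequence representation

model = GRUEncoder(NUM_ITEMS)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr = 0.001
BATCH_SIZE = 128                                           # batch size from the paper
```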