Multi-Order Attentive Ranking Model for Sequential Recommendation
Authors: Lu Yu, Chuxu Zhang, Shangsong Liang, Xiangliang Zhang
AAAI 2019, pp. 5709-5716
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Thorough experiments are conducted to show the features of MARank under various component settings. Furthermore, experimental results on several public datasets show that MARank significantly outperforms the state-of-the-art baselines on different evaluation metrics." The Experimental Evaluation section opens: "In this section, we present our experimental results, with comparison to different kinds of baseline methods to answer the following research questions:" |
| Researcher Affiliation | Academia | Lu Yu,1 Chuxu Zhang,2 Shangsong Liang,1,3 Xiangliang Zhang1 1King Abdullah University of Science and Technology, Thuwal, 23955, SA 2University of Notre Dame, IN 46556, USA 3School of Data and Computer Science, Sun Yat-sen University, China |
| Pseudocode | No | The paper describes the model architecture and mathematical formulations but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code can be found at https://github.com/voladorlu/MARank. |
| Open Datasets | Yes | To validate the proposed method MARank for top-N sequential recommendation, we use three publicly available datasets, Yelp and Amazon, with statistics information in Table 1 after pre-processing. The Amazon data (He, Kang, and McAuley 2017) contains product purchase history ranging from May 1996 to July 2014. Sources: https://www.yelp.com/dataset/challenge and https://www.amazon.com/ |
| Dataset Splits | No | The paper uses the last 20% of each user's feedbacks as the test set and the remaining data for training: "Without special description, we use last 20% of each user's feedbacks as test set." It does not mention a separate validation split or how parameter tuning was performed relative to a held-out set. A minimal sketch of this per-user temporal split is given after the table. |
| Hardware Specification | Yes | The model is trained in TensorFlow on a GeForce GTX 1080Ti GPU. |
| Software Dependencies | No | The paper mentions TensorFlow as the framework used, but gives no version numbers for it or for any other software library or dependency. |
| Experiment Setup | Yes | We use grid search to select the best parameter setting for different algorithms: embedding size d from {8, 16, 32, 64} and regularization hyper-parameter λ from {0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001}. Since Adam is employed to optimize all the algorithms except TransRec, we use Adam's suggested default learning rate, i.e., 0.001; the learning rate for TransRec is selected from {0.1, 0.05, 0.01, 0.005}. Without special mention, we fix the length N of the ranked list at 20. The dropout ratio and gradient clipping norm are set to 0.5 and 5, respectively. A hedged grid-search sketch follows the table. |
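
To make the Dataset Splits row concrete, here is a minimal sketch of the per-user temporal split the paper describes (last 20% of each user's feedbacks held out for testing). It assumes each user's interactions are already sorted by timestamp; the function and variable names are illustrative, not taken from the MARank repository.

```python
from typing import Dict, List, Tuple

def split_per_user(
    user_sequences: Dict[int, List[int]], test_ratio: float = 0.2
) -> Tuple[Dict[int, List[int]], Dict[int, List[int]]]:
    """Split each user's time-ordered item sequence into train/test sets,
    holding out the last `test_ratio` fraction of interactions for testing."""
    train, test = {}, {}
    for user, items in user_sequences.items():
        cut = max(1, int(len(items) * (1 - test_ratio)))  # keep >= 1 train item
        train[user] = items[:cut]
        test[user] = items[cut:]
    return train, test

# Example: a user with 10 chronologically ordered feedbacks
train, test = split_per_user({42: list(range(10))})
assert train[42] == list(range(8)) and test[42] == [8, 9]
```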
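
The Experiment Setup row can likewise be read as the sketch below. The two grids, the Adam learning rate (0.001), the dropout ratio (0.5), the gradient-clipping norm (5), and N = 20 are the reported values; `train_and_eval` is a hypothetical placeholder for MARank's training-and-evaluation loop, which this sketch does not implement.

```python
import itertools
import random

# Reported search grids (from the paper's experiment setup).
EMBED_SIZES = [8, 16, 32, 64]
REG_LAMBDAS = [0.05, 0.01, 0.005, 0.001, 0.0005, 0.0001]

def train_and_eval(d: int, reg_lambda: float) -> float:
    """Hypothetical stand-in: train MARank with Adam (lr=0.001), dropout 0.5,
    and gradient clipping norm 5, then score a ranked list of length N=20.
    A placeholder metric is returned here; real code would train the model."""
    return random.random()

# Pick the best (embedding size, lambda) pair over the full grid.
best = max(itertools.product(EMBED_SIZES, REG_LAMBDAS),
           key=lambda cfg: train_and_eval(*cfg))
print("best (embedding size, lambda):", best)
```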