Sequential Recommender System based on Hierarchical Attention Networks
Authors: Haochao Ying, Fuzhen Zhuang, Fuzheng Zhang, Yanchi Liu, Guandong Xu, Xing Xie, Hui Xiong, Jian Wu
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The experimental study demonstrates the superiority of our method compared with other state-of-the-art ones. In this section, we conduct experiments to answer the following questions: 1) what's the performance of our model as compared to other state-of-the-art methods? 2) what's the influence of long- and short-term preferences in our model? 3) how do the parameters affect model performance, such as the regularization parameters and the number of dimensions? |
| Researcher Affiliation | Collaboration | (1) College of Computer Science and Technology, Zhejiang University, China; (2) Key Lab of IIP of CAS, Institute of Computing Technology, CAS, Beijing, China; (3) Microsoft Research, China; (4) Advanced Analytics Institute, University of Technology, Australia; (5) Management Science & Information Systems, Rutgers University, USA |
| Pseudocode | Yes | Algorithm 1: SHAN Algorithm (see the attention sketch after this table) |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | We perform experiments on two real-world datasets, Tmall [Hu et al., 2017] and Gowalla [Cho et al., 2011], to demonstrate the effectiveness of our model. |
| Dataset Splits | No | Similar to [Hu et al., 2017], we randomly select 20% of sessions in the last month for testing, and the rest are used for training. We also randomly hold out one item in each session as the next item to be predicted. (The described procedure is sketched after this table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers). |
| Experiment Setup | Yes | Input: long-term item set L, short-term item set S, learning rate η, regularization λ, number of dimensions K. In the experiments, we set the size to 100 for the trade-off between computation cost and recommendation quality for both datasets. Table 3 shows the influence of different regularization values at Recall@20 (the Recall@K metric is sketched after this table). |
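
The Pseudocode row above points at Algorithm 1 (SHAN). Since the Open Source Code row notes that no implementation is released, the following is a minimal NumPy sketch of the hierarchical attention the algorithm describes: a first attention over the long-term item set yields a long-term user representation, and a second attention over that representation together with the short-term items yields the hybrid representation used to score items. All variable names, shapes, and the choice of ReLU as the nonlinearity are assumptions drawn from the paper's description, not the authors' code.

```python
import numpy as np

def shan_score(u, X_long, X_short, W1, b1, W2, b2, V):
    """Sketch of SHAN's two-layer attention (Ying et al., IJCAI 2018).

    u: (K,) user embedding; X_long: (|L|, K) long-term item embeddings;
    X_short: (|S|, K) short-term (current-session) item embeddings;
    W1, W2: (K, K) and b1, b2: (K,) attention parameters; V: (n_items, K)
    item embedding table. Shapes and names are assumptions.
    """
    def attention(q, X, W, b):
        h = np.maximum(X @ W + b, 0.0)        # ReLU projection of item embeddings
        a = np.exp(h @ q)
        a /= a.sum()                          # softmax attention weights
        return a @ X                          # weighted sum of item embeddings

    u_long = attention(u, X_long, W1, b1)     # long-term user representation
    X_hybrid = np.vstack([X_short, u_long[None, :]])  # short-term items + u_long
    u_hybrid = attention(u, X_hybrid, W2, b2)         # hybrid representation
    return V @ u_hybrid                       # scores over all candidate items
```

Ranking the returned scores and keeping the top-K entries gives the recommendation list.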
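
The Dataset Splits row quotes a procedural split rather than fixed split files, which is presumably why it is marked No. Under the illustrative assumption that `sessions` maps session ids to item lists and `last_month_ids` identifies sessions from the final month, the described split might look like this:

```python
import random

def split_sessions(sessions, last_month_ids, test_frac=0.2, seed=0):
    """Randomly pick 20% of last-month sessions for testing, keep the rest
    for training, and hold out one random item per session as the next-item
    target, as the paper describes (following Hu et al., 2017).
    """
    rng = random.Random(seed)
    last = list(last_month_ids)
    rng.shuffle(last)
    test_ids = set(last[: int(test_frac * len(last))])

    def hold_out(items):
        items = list(items)
        target = items.pop(rng.randrange(len(items)))  # held-out next item
        return items, target

    train = {sid: hold_out(s) for sid, s in sessions.items() if sid not in test_ids}
    test = {sid: hold_out(sessions[sid]) for sid in test_ids}
    return train, test
```

Because the selection is random and no seed is reported, the original split cannot be reproduced exactly; the sketch fixes a seed only for its own repeatability.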
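
The Experiment Setup row references Table 3's Recall@20 results. The paper's evaluation script is likewise unavailable, so the sketch below uses the conventional Recall@K definition for next-item prediction with a single held-out target per session:

```python
def recall_at_k(ranked_items, target, k=20):
    """1.0 if the held-out target appears in the top-k ranked items, else 0.0."""
    return 1.0 if target in ranked_items[:k] else 0.0

def mean_recall_at_k(rankings, targets, k=20):
    """Average Recall@K over all test sessions."""
    hits = [recall_at_k(r, t, k) for r, t in zip(rankings, targets)]
    return sum(hits) / len(hits)
```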