Feature-level Deeper Self-Attention Network for Sequential Recommendation
Authors: Tingting Zhang, Pengpeng Zhao, Yanchi Liu, Victor S. Sheng, Jiajie Xu, Deqing Wang, Guanfeng Liu, Xiaofang Zhou
IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, comprehensive experimental results demonstrate that considering the transition relationships between features can significantly improve the performance of sequential recommendation. |
| Researcher Affiliation | Academia | ¹Institute of AI, School of Computer Science and Technology, Soochow University, China; ²Zhejiang Lab, China; ³Rutgers University, New Jersey, USA; ⁴University of Central Arkansas, Conway, USA; ⁵School of Computer, Beihang University, Beijing, China; ⁶Department of Computing, Macquarie University, Sydney, Australia; ⁷The University of Queensland, Brisbane, Australia |
| Pseudocode | No | The paper describes the model architecture and provides mathematical equations but does not include a dedicated pseudocode block or algorithm. |
| Open Source Code | No | The paper does not provide any statement or link indicating that its source code is publicly available. |
| Open Datasets | Yes | We perform experiments on two publicly available datasets, i.e., Amazon¹ [Zhou et al., 2018] and Tmall² [Tang and Wang, 2018]. ¹ http://jmcauley.ucsd.edu/data/amazon/links.html ² https://tianchi.aliyun.com/competition |
| Dataset Splits | No | The paper describes training and testing procedures but does not explicitly report train/validation/test splits (e.g., percentages or sample counts) for reproducibility; it only implies a leave-one-out protocol in which each user's sequence prefix is used for training and the last item is held out for testing (a hedged sketch of this protocol follows the table). |
| Hardware Specification | No | The paper does not mention any specific hardware specifications (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers (e.g., programming languages, libraries, frameworks). |
| Experiment Setup | Yes | Without a special mention in this text, we fix the embedding size of all models to 100 and the batch size to 10. Also, the maximum sequence length n is set to 50 on the two datasets. (An illustrative configuration sketch follows the table.) |
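
The split protocol above is only implied by the paper, so the following is a minimal sketch, assuming the standard leave-one-out setup common in sequential recommendation: each user's chronologically ordered sequence contributes all but its last item to training, and the last item is held out for testing. The function name `leave_one_out_split` and the dictionary layout are illustrative assumptions, not from the paper.

```python
def leave_one_out_split(user_sequences):
    """Split each user's chronologically ordered item sequence.

    user_sequences: dict mapping user_id -> list of item_ids,
    ordered by interaction time.
    Returns (train, test): train maps user_id -> prefix sequence,
    test maps user_id -> the held-out last item.
    """
    train, test = {}, {}
    for user, items in user_sequences.items():
        if len(items) < 2:  # need at least one training item and one test item
            continue
        train[user] = items[:-1]  # all but the last interaction
        test[user] = items[-1]    # last interaction held out for testing
    return train, test


if __name__ == "__main__":
    sequences = {"u1": [3, 7, 7, 12, 5], "u2": [9, 1]}
    train, test = leave_one_out_split(sequences)
    print(train)  # {'u1': [3, 7, 7, 12], 'u2': [9]}
    print(test)   # {'u1': 5, 'u2': 1}
```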
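The experiment setup row states three concrete hyperparameters. Below is a minimal configuration sketch using those values; the left-padding convention and the helper `pad_or_truncate` are assumptions for illustration, since the paper does not describe its padding scheme.

```python
EMBEDDING_SIZE = 100  # "we fix the embedding size of all models to 100"
BATCH_SIZE = 10       # "the batch size to 10"
MAX_SEQ_LEN = 50      # "the maximum sequence length n is set to 50"


def pad_or_truncate(seq, max_len=MAX_SEQ_LEN, pad_id=0):
    """Keep the most recent max_len items; left-pad shorter sequences.

    Left-padding with pad_id 0 is an assumed convention, not stated
    in the paper.
    """
    seq = seq[-max_len:]  # keep the most recent interactions
    return [pad_id] * (max_len - len(seq)) + seq


print(len(pad_or_truncate(list(range(1, 8)))))  # 50
```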