LLM-ESR: Large Language Models Enhancement for Long-tailed Sequential Recommendation
Authors: Qidong Liu, Xian Wu, Yejing Wang, Zijian Zhang, Feng Tian, Yefeng Zheng, Xiangyu Zhao
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To verify the effectiveness and versatility of our proposed enhancement framework, we conduct extensive experiments on three real-world datasets using three popular SRS models. |
| Researcher Affiliation | Collaboration | 1 School of Automation Science & Engineering, MOEKLINNS Lab, Xi'an Jiaotong University 2 City University of Hong Kong 3 Jarvis Research Center, Tencent YouTu Lab 4 Jilin University 5 School of Computer Science & Technology, MOEKLINNS Lab, Xi'an Jiaotong University 6 Medical Artificial Intelligence Lab, Westlake University |
| Pseudocode | Yes | Due to the limited space, the algorithm lies in Appendix A.2 for more clarity. Algorithm 1 Train and inference process of LLM-ESR |
| Open Source Code | Yes | The implementation code is available at https://github.com/Applied-Machine-Learning-Lab/LLM-ESR. |
| Open Datasets | Yes | There are three real-world datasets applied for evaluation, i.e., Yelp, Amazon Fashion and Amazon Beauty. ... Yelp is the dataset that records the check-in histories and corresponding reviews of users. ... Amazon [42] is a large e-commerce dataset |
| Dataset Splits | Yes | As for the data split, the last item $v_{n_u}$ and the penultimate item $v_{n_u-1}$ of each interaction sequence are taken out as the test and validation, respectively. |
| Hardware Specification | Yes | The hardware used in all experiments is an Intel Xeon Gold 6133 platform with Tesla V100 32G GPUs |
| Software Dependencies | Yes | The basic software requirements are Python 3.9.5 and PyTorch 1.12.0. |
| Experiment Setup | Yes | The hyper-parameters N and α are searched from {2, 6, 10, 14, 18} and {1, 0.5, 0.1, 0.05, 0.01}. ... the batch size and learning rate are set as 128 and 0.001 for all datasets. The embedding size is 128 for all baselines, while 64 for LLM-ESR. ... Then, we choose Adam as the optimizer. |
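
The split and training details quoted in the table map onto a short PyTorch sketch. This is a minimal illustration under stated assumptions, not the authors' implementation: the item-vocabulary size and the `Embedding` stand-in for the SRS backbone are hypothetical placeholders, and the released code at the GitHub link above is authoritative.

```python
import itertools
import torch

def leave_one_out_split(sequence):
    """Leave-one-out protocol quoted above: the last item of each user's
    interaction sequence is held out for testing, the penultimate item
    for validation, and the remainder is used for training."""
    assert len(sequence) >= 3, "need at least three interactions"
    return sequence[:-2], sequence[-2], sequence[-1]

# Training setup quoted in the paper.
BATCH_SIZE = 128
LEARNING_RATE = 1e-3
EMBED_DIM = 64            # 64 for LLM-ESR; 128 for the baselines
NUM_ITEMS = 10_000        # hypothetical vocabulary size, not from the paper

# Hypothetical stand-in for the SRS backbone enhanced by LLM-ESR.
model = torch.nn.Embedding(NUM_ITEMS, EMBED_DIM)
optimizer = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)

# Hyper-parameter grid (N, alpha) searched in the paper.
for n, alpha in itertools.product([2, 6, 10, 14, 18], [1, 0.5, 0.1, 0.05, 0.01]):
    pass  # train and evaluate one configuration here
```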