End-to-end Learnable Clustering for Intent Learning in Recommendation
Authors: Yue Liu, Shihao Zhu, Jun Xia, Yingwei Ma, Jian Ma, Xinwang Liu, Shengju Yu, Kejun Zhang, Wenliang Zhong
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Both experimental results and theoretical analyses demonstrate the superiority of ELCRec from six perspectives. This section aims to comprehensively evaluate ELCRec by answering research questions (RQs). |
| Researcher Affiliation | Collaboration | Yue Liu (Ant Group; National University of Singapore; yueliu19990731@163.com); Shihao Zhu (Ant Group, Hangzhou, China); Jun Xia (Westlake University, Hangzhou, China); Yingwei Ma (Alibaba Group, Hangzhou, China); Jian Ma (Ant Group, Hangzhou, China); Xinwang Liu (National University of Defense Technology, Changsha, China); Shengju Yu (National University of Defense Technology, Changsha, China); Kejun Zhang (Zhejiang University, Hangzhou, China); Wenliang Zhong (Ant Group, Hangzhou, China) |
| Pseudocode | Yes | We present the overall algorithm process of the proposed ELCRec method in Algorithm 1 in the Appendix. |
| Open Source Code | Yes | The codes are available on GitHub³. A collection (papers, codes, datasets) of deep group recommendation/intent learning methods is available on GitHub⁴. ³https://github.com/yueliu1999/ELCRec |
| Open Datasets | Yes | We performed our experiments on four public benchmarks: Sports, Beauty, Toys, and Yelp⁵. The Sports, Beauty, and Toys datasets are subcategories of the Amazon Review Dataset [71]. The Sports dataset contains reviews for sporting goods, the Beauty dataset contains reviews for beauty products, and the Toys dataset contains toy reviews. The Yelp dataset focuses on business recommendations and is provided by the Yelp company. The Sports, Beauty, and Toys datasets [71, 33] are obtained from http://jmcauley.ucsd.edu/data/amazon/index.html. The Yelp dataset is obtained from https://www.yelp.com/dataset. For movie recommendation, we conducted experiments on the MovieLens 1M dataset (ML-1M) [29]. For news recommendation, we conducted experiments on the MIND-small dataset [106]. |
| Dataset Splits | Yes | We adopted the dataset split settings used in the previous method [18]. Following [18], we only kept datasets where all users and items have at least five interactions. |
| Hardware Specification | Yes | Experimental results on the public benchmarks are obtained from a desktop computer with one NVIDIA GeForce RTX 4090 GPU, six 13th Gen Intel(R) Core(TM) i9-13900F CPUs, and the PyTorch platform. |
| Software Dependencies | No | The paper mentions implementation using the 'PyTorch platform' and 'TensorFlow deep learning platform' but does not specify exact version numbers for these or any other software dependencies. |
| Experiment Setup | Yes | In the Transformer encoder, we employed self-attention blocks with two attention heads. The latent dimension, denoted as d, was set to 64, and the maximum sequence length, denoted as T, was set to 50. We utilized the Adam optimizer with a learning rate of 1e-3. The decay rate for the first moment estimate was set to 0.9, and the decay rate for the second moment estimate was set to 0.999. The cluster number, denoted as k, was set to 256 for the Yelp and Beauty datasets and 512 for the Sports and Toys datasets. The trade-off hyper-parameter, denoted as α, was set to 1 for the Sports and Toys datasets, 0.1 for the Yelp dataset, and 10 for the Beauty dataset. |
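The reported setup above can be sketched in PyTorch. This is a minimal illustration, not the authors' code: it wires up a Transformer encoder with two attention heads, latent dimension d = 64, maximum sequence length T = 50, and an Adam optimizer with lr = 1e-3 and decay rates (0.9, 0.999). The number of encoder layers and the feed-forward width are assumptions, as the excerpt does not state them.

```python
import torch
import torch.nn as nn

# Hyper-parameters quoted in the reproducibility table.
D_MODEL = 64    # latent dimension d
N_HEADS = 2     # self-attention heads
MAX_LEN = 50    # maximum sequence length T

# Layer count and feed-forward size below are illustrative assumptions.
encoder_layer = nn.TransformerEncoderLayer(
    d_model=D_MODEL, nhead=N_HEADS, dim_feedforward=256, batch_first=True
)
encoder = nn.TransformerEncoder(encoder_layer, num_layers=2)

# Adam with the stated learning rate and moment-estimate decay rates.
optimizer = torch.optim.Adam(
    encoder.parameters(), lr=1e-3, betas=(0.9, 0.999)
)

# One forward pass on a dummy batch of item-embedding sequences.
x = torch.randn(8, MAX_LEN, D_MODEL)  # (batch, T, d)
out = encoder(x)                      # (batch, T, d)
```

The cluster number k (256 or 512, depending on the dataset) and the trade-off weight α would enter the clustering loss of ELCRec itself, which is outside the scope of this encoder sketch.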