Preference-Adaptive Meta-Learning for Cold-Start Recommendation
Authors: Li Wang, Binbin Jin, Zhenya Huang, Hongke Zhao, Defu Lian, Qi Liu, Enhong Chen
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on two publicly available datasets. Experimental results validate the power of social relations and the effectiveness of PAML. |
| Researcher Affiliation | Academia | (1) Anhui Province Key Lab. of Big Data Analysis and Application, School of Data Science & School of Computer Science and Technology, University of Science and Technology of China; (2) College of Management and Economics, Tianjin University. Emails: {wl063, bb0725}@mail.ustc.edu.cn, {huangzhy, liandefu, qiliuql, cheneh}@ustc.edu.cn, hongke@tju.edu.cn |
| Pseudocode | Yes | Algorithm 1: Training Procedure of PAML (a hedged training-loop sketch is given after this table) |
| Open Source Code | No | The paper does not contain any statement or link indicating the release of open-source code for the described methodology. |
| Open Datasets | Yes | We conduct experiments on two real-world datasets: Douban Book (https://book.douban.com) and Yelp (https://www.yelp.com/dataset), both from publicly accessible repositories. |
| Dataset Splits | Yes | All tasks are split into meta-training tasks T^tr and meta-testing tasks T^te. Generally, T^tr are used to train the model while T^te are used to validate its performance. For each task in T^tr and T^te, its friend set and support set are used to adapt the prior knowledge to preference-specific knowledge and personalized knowledge. In addition, the query set in T^tr also plays a role in updating the prior knowledge. (See the task-structure sketch after this table.) |
| Hardware Specification | Yes | In this paper, our proposed PAML is implemented in PyTorch and trained on a Linux system (2.10 GHz Intel Xeon Gold 6230 CPUs and a Tesla V100 GPU). |
| Software Dependencies | No | The paper states that PAML is 'implemented by Pytorch' but does not specify the version number of PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We set the dimension of the feature embeddings to 32 and the batch size to 64. The two layers used for prediction have 64 nodes each. We set the local and global learning rates (i.e., α, β) to 0.001 and 0.001 for Douban Book, and to 0.001 and 0.0005 for Yelp, respectively. For both datasets, the number of implicit friends is empirically fixed to 5 by default, and the number of local updates is fixed to 1 by default. (These values are collected in the configuration sketch after this table.) |
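
The paper's Algorithm 1 itself is not reproduced in this report. As a point of reference, below is a minimal PyTorch sketch of a MAML-style bi-level training loop of the kind Algorithm 1 describes: adapt on each task's support set with the local learning rate α, then update the shared prior on the summed query losses with the global learning rate β. The model, the helper names, and the use of random embeddings are our simplifications, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def init_params(dim=32, hidden=64):
    """Shared prior: two 64-node prediction layers, matching the quoted setup."""
    def w(*shape):
        return (torch.randn(*shape) * 0.01).requires_grad_()
    return [w(2 * dim, hidden), w(hidden),   # layer 1
            w(hidden, hidden), w(hidden),    # layer 2
            w(hidden, 1), w(1)]              # output

def forward(params, user_emb, item_emb):
    w1, b1, w2, b2, w3, b3 = params
    x = torch.cat([user_emb, item_emb], dim=-1)
    x = torch.relu(x @ w1 + b1)
    x = torch.relu(x @ w2 + b2)
    return (x @ w3 + b3).squeeze(-1)

def task_loss(params, batch):
    user_emb, item_emb, ratings = batch
    return F.mse_loss(forward(params, user_emb, item_emb), ratings)

def train_step(params, tasks, alpha=1e-3, beta=1e-3, local_updates=1):
    """One meta-update: adapt on each task's support set (local rate alpha),
    then update the shared prior on the summed query losses (global rate beta)."""
    meta_loss = 0.0
    for support, query in tasks:
        adapted = params
        for _ in range(local_updates):  # inner loop: task-level adaptation
            grads = torch.autograd.grad(task_loss(adapted, support),
                                        adapted, create_graph=True)
            adapted = [p - alpha * g for p, g in zip(adapted, grads)]
        meta_loss = meta_loss + task_loss(adapted, query)
    meta_grads = torch.autograd.grad(meta_loss, params)
    with torch.no_grad():
        for p, g in zip(params, meta_grads):
            p -= beta * g  # outer loop: update the prior
    return meta_loss.item()

# Toy run on random embeddings (dim 32, batch of 8 per set).
def rand_batch(n=8, dim=32):
    return torch.randn(n, dim), torch.randn(n, dim), torch.randn(n)

params = init_params()
tasks = [(rand_batch(), rand_batch()) for _ in range(4)]
print(train_step(params, tasks))
```

Note that PAML additionally adapts preference-specific parameters from the friend set before the personalized update; that social path is omitted here for brevity.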
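
The task split quoted in the "Dataset Splits" row implies a per-user task structure with a friend set, a support set, and a query set. The sketch below illustrates one plausible data layout and a random meta-train/meta-test split; the class fields and the 80/20 ratio are assumptions, since the quoted passage does not state the paper's exact proportions.

```python
import random
from dataclasses import dataclass

@dataclass
class Task:
    """One cold-start user task, mirroring the split quoted above."""
    user_id: int
    friend_set: list   # socially related users, used for adaptation
    support_set: list  # a few observed interactions, used for adaptation
    query_set: list    # held-out interactions: meta-update (T^tr) or evaluation (T^te)

def split_tasks(tasks, train_ratio=0.8, seed=0):
    """Random meta-train / meta-test split. The 80/20 ratio is an assumption;
    the quoted passage does not give the paper's exact proportion."""
    rng = random.Random(seed)
    shuffled = list(tasks)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_ratio)
    return shuffled[:cut], shuffled[cut:]  # T^tr, T^te
```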
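
For convenience, the hyperparameters quoted in the "Experiment Setup" row can be collected in one place. Only the numeric values below come from the paper; the container and key names are our own.

```python
# Values quoted from the paper's setup; the dictionary layout and key
# names are ours, introduced only to gather the numbers in one place.
PAML_CONFIG = {
    "embedding_dim": 32,
    "batch_size": 64,
    "prediction_layers": [64, 64],  # two layers, 64 nodes each
    "n_implicit_friends": 5,        # empirically fixed, by default
    "local_updates": 1,             # number of inner-loop steps
    "learning_rates": {             # (alpha_local, beta_global)
        "douban_book": (0.001, 0.001),
        "yelp": (0.001, 0.0005),
    },
}
```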