Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Preference Aware Dual Contrastive Learning for Item Cold-Start Recommendation
Authors: Wenbo Wang, Bingquan Liu, Lili Shan, Chengjie Sun, Ben Chen, Jian Guan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments are conducted to demonstrate the effectiveness of the proposed method, and the results show the superiority of our method, as compared with the state-of-the-arts. |
| Researcher Affiliation | Collaboration | Wenbo Wang¹, Bingquan Liu¹*, Lili Shan¹, Chengjie Sun¹, Ben Chen², Jian Guan³. ¹ Faculty of Computing, Harbin Institute of Technology, Harbin, 150001, China; ² Alibaba Group, Hangzhou, 310000, China; ³ Group of Intelligent Signal Processing, College of Computer Science and Technology, Harbin Engineering University, Harbin, 150001, China |
| Pseudocode | No | The paper describes the method's steps in prose and through diagrams, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or a link to the open-source code for the proposed PAD-CLRec method. The provided GitHub link is for a baseline method (CLCRec). |
| Open Datasets | Yes | We evaluate the proposed model on two real-world datasets including Amazon Rec dataset [1] and Amazon Fashion dataset [2]. [1] https://github.com/weiyinwei/CLCRec [2] http://jmcauley.ucsd.edu/data/amazon/ |
| Dataset Splits | Yes | For the cold-start task, we randomly select 20% items as cold items. In which, 50% interactions of these cold items are randomly selected as the Cold validation set, with the remainder interactions as Cold test set. Whereas, the rest 80% items are used as warm items. These warm items' interactions are divided into three parts, with 80% as the training set, 10% as the Warm validation set and the rest 10% as the Warm test set. In addition, an extra All validation (test) set is built by combining the Warm and Cold validation (test) sets. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for conducting the experiments. |
| Software Dependencies | No | The paper mentions optimizers and initialization methods (e.g., Adam optimizer, Xavier algorithm) but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch/TensorFlow versions). |
| Experiment Setup | Yes | In our experiments, the Xavier algorithm (Glorot and Bengio 2010) is utilized for parameters initialization. Adam optimizer (Kingma and Ba 2014) is adopted for model optimization with the learning rate of 1e-2, and the batch size is set as 256. The dimension of the item and the user embedding are set to be 64. All the number of negative samples in the joint objective function is set as 512. Hyper-parameters, i.e., λ, η, α, and β, are empirically selected from (0, 1). |
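The dataset-split procedure quoted above (20% cold items, a 50/50 Cold validation/test split, and an 80/10/10 split of warm-item interactions) can be sketched as follows. This is a minimal illustration, not the authors' code: the function name, the seed, and the assumption that interactions arrive as `(user, item)` pairs are all hypothetical.

```python
import random

def split_cold_start(interactions, seed=0):
    """Sketch of the reported cold-start split.

    `interactions` is assumed to be a list of (user, item) pairs
    (hypothetical format; the paper does not specify one).
    """
    rng = random.Random(seed)

    # 20% of items are designated cold, the rest warm.
    items = sorted({item for _, item in interactions})
    rng.shuffle(items)
    n_cold = int(0.2 * len(items))
    cold_items = set(items[:n_cold])

    cold = [p for p in interactions if p[1] in cold_items]
    warm = [p for p in interactions if p[1] not in cold_items]
    rng.shuffle(cold)
    rng.shuffle(warm)

    # Cold interactions: 50% validation, 50% test.
    half = len(cold) // 2
    cold_val, cold_test = cold[:half], cold[half:]

    # Warm interactions: 80% train, 10% validation, 10% test.
    n_train = int(0.8 * len(warm))
    n_val = int(0.1 * len(warm))
    train = warm[:n_train]
    warm_val = warm[n_train:n_train + n_val]
    warm_test = warm[n_train + n_val:]

    # "All" sets combine the Warm and Cold splits.
    all_val = warm_val + cold_val
    all_test = warm_test + cold_test
    return train, warm_val, warm_test, cold_val, cold_test, all_val, all_test
```

Because cold items are held out entirely, no cold-item interaction ever reaches the training set, which is what makes the Cold test set a genuine item cold-start benchmark.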
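The reported experiment setup amounts to a small set of hyperparameters plus Xavier (Glorot) initialization. A minimal NumPy sketch of that configuration is below; the user/item counts and all variable names are illustrative (the paper trains with Adam in a deep-learning framework, which this sketch does not reproduce).

```python
import numpy as np

# Hyperparameters as reported in the paper's experiment setup.
EMB_DIM = 64          # item and user embedding dimension
LEARNING_RATE = 1e-2  # Adam learning rate
BATCH_SIZE = 256
NUM_NEGATIVES = 512   # negative samples in the joint objective

def xavier_uniform(fan_in, fan_out, rng=np.random.default_rng(0)):
    """Xavier/Glorot uniform init: U(-a, a), a = sqrt(6 / (fan_in + fan_out))."""
    a = np.sqrt(6.0 / (fan_in + fan_out))
    return rng.uniform(-a, a, size=(fan_in, fan_out))

# Illustrative catalogue sizes, not taken from the paper.
n_users, n_items = 1000, 500
user_emb = xavier_uniform(n_users, EMB_DIM)
item_emb = xavier_uniform(n_items, EMB_DIM)
```

The Xavier bound keeps initial activations at roughly unit variance regardless of layer width, which is why it is a common default for embedding and linear layers.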