Review-Enhanced Hierarchical Contrastive Learning for Recommendation

Authors: Ke Wang, Yanmin Zhu, Tianzi Zang, Chunyang Wang, Mengyuan Jing

AAAI 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Extensive experiments verify the superiority of ReHCL compared with state-of-the-arts." Extensive experiments are conducted on three datasets to verify the superiority of ReHCL over strong baselines. |
| Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, Shanghai Jiao Tong University, Shanghai, China; (2) Hangzhou Innovation Institute, Beihang University, Hangzhou, China; (3) Nanjing University of Aeronautics and Astronautics, Nanjing, China |
| Pseudocode | No | The paper describes its methods in prose and mathematical equations but does not provide any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that its source code has been released. |
| Open Datasets | Yes | "We evaluate our model on Amazon dataset (McAuley and Leskovec 2013), which contains ratings and user-generated reviews." (http://jmcauley.ucsd.edu/data/amazon/) A loading sketch is given below the table. |
| Dataset Splits | Yes | "Following previous studies (Chen et al. 2018; Shuai et al. 2022), we randomly split the user-item pairs of each dataset into 80% training set, 10% validation set, and 10% testing set." A split sketch is given below the table. |
| Hardware Specification | No | The paper states "ReHCL is implemented with Tensorflow" but does not specify any particular hardware (e.g., GPU model, CPU, memory) used for the experiments. |
| Software Dependencies | No | The paper mentions "ReHCL is implemented with Tensorflow" and the "Adam optimizer" but does not provide version numbers for TensorFlow or any other software dependency. |
| Experiment Setup | Yes | "ReHCL is implemented with Tensorflow. We adopt Adam optimizer with an initial learning rate of 10⁻³. The layer number is 3 and the embedding size is 64. We used the L2 regularization and its weight β₃ is set to 10⁻⁴." A configuration sketch is given below the table. |
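
For concreteness, here is a minimal sketch of reading the Amazon review data cited in the Open Datasets row. The file name is a placeholder, and the gzipped JSON-lines layout is an assumption about the downloaded dump (older dumps store Python literals per line); the field names `reviewerID`, `asin`, and `overall` are the ones used in this dataset.

```python
import gzip
import json

def iter_reviews(path: str):
    """Yield one review dict per line from a gzipped JSON-lines dump.

    Assumes the JSON-lines layout of the Amazon review data; older
    dumps use Python-literal lines and need ast.literal_eval instead.
    """
    with gzip.open(path, "rt") as f:
        for line in f:
            yield json.loads(line)

if __name__ == "__main__":
    # Placeholder file name; any per-category dump from the dataset works.
    path = "reviews_Musical_Instruments_5.json.gz"
    # Collect (user, item, rating) triples for downstream splitting.
    pairs = [(r["reviewerID"], r["asin"], r["overall"]) for r in iter_reviews(path)]
    print(f"{len(pairs)} user-item interactions")
```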
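
The 80%/10%/10% random split quoted in the Dataset Splits row can be reproduced along these lines. This is a sketch, not the authors' code: it assumes the interactions sit in a pandas DataFrame, and the seed is chosen for illustration (the paper does not report one).

```python
import pandas as pd

def split_user_item_pairs(df: pd.DataFrame, seed: int = 42):
    """Randomly split user-item pairs into 80% train / 10% val / 10% test."""
    shuffled = df.sample(frac=1.0, random_state=seed).reset_index(drop=True)
    n_train = int(0.8 * len(shuffled))
    n_val = int(0.1 * len(shuffled))
    train = shuffled.iloc[:n_train]
    val = shuffled.iloc[n_train:n_train + n_val]
    test = shuffled.iloc[n_train + n_val:]
    return train, val, test
```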
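
The hyperparameters in the Experiment Setup row translate directly into a TensorFlow configuration. A minimal sketch follows: the user/item counts are placeholders, and the two embedding layers merely stand in for the model, whose architecture the row does not describe.

```python
import tensorflow as tf

EMBED_DIM = 64        # embedding size reported in the paper
NUM_LAYERS = 3        # layer number reported in the paper
LEARNING_RATE = 1e-3  # initial Adam learning rate (10^-3)
L2_WEIGHT = 1e-4      # L2 regularization weight beta_3 (10^-4)

# Placeholder sizes; the actual counts depend on the dataset.
NUM_USERS, NUM_ITEMS = 10_000, 5_000

l2 = tf.keras.regularizers.l2(L2_WEIGHT)
user_emb = tf.keras.layers.Embedding(NUM_USERS, EMBED_DIM, embeddings_regularizer=l2)
item_emb = tf.keras.layers.Embedding(NUM_ITEMS, EMBED_DIM, embeddings_regularizer=l2)
optimizer = tf.keras.optimizers.Adam(learning_rate=LEARNING_RATE)
```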