Contrastive Graph Structure Learning via Information Bottleneck for Recommendation
Authors: Chunyu Wei, Jian Liang, Di Liu, Fei Wang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on public datasets are provided to show that our model significantly outperforms strong baselines. |
| Researcher Affiliation | Collaboration | Chunyu Wei¹, Jian Liang¹, Di Liu¹, Fei Wang². ¹Alibaba Group, China; ²Department of Population Health Sciences, Weill Cornell Medicine, USA |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available on https://github.com/weicy15/CGI. |
| Open Datasets | Yes | Three publicly available datasets are employed in our experiments, i.e., Yelp2018, MovieLens-1M and Douban. The detailed description can be found in the Appendix. |
| Dataset Splits | Yes | For each dataset, we randomly select 80% of the historical interactions of each user as the training set, 10% of those as the validation set, and the remaining 10% as the test set. A minimal per-user split sketch is given after the table. |
| Hardware Specification | No | The paper states that hardware details are discussed in the Appendix; since the Appendix is not included in the provided text, specific hardware details such as GPU/CPU models or memory amounts are not available. |
| Software Dependencies | No | The paper mentions using 'Adam' for optimization but does not provide specific version numbers for any software libraries, programming languages, or other dependencies. |
| Experiment Setup | Yes | We initialize the latent vectors of both users and items with small random values for all models. The parameters for baseline methods are initialized as in the original papers, and are then carefully tuned to achieve optimal performances. For a fair comparison, the dimensions of both the user and item embeddings are all fixed to 64. We use Adam with β1 = 0.9, β2 = 0.999, ϵ = 1e-8 to optimize all these methods. The batch size is set to 2048. The learning rate is set as 0.005 and decayed at the rate of 0.9 every five epochs. We set λ = 0.02 and β = 0.01 for the coefficients in Eq. 15. More details about hyper-parameter settings of baselines can be found in the Appendix. An optimizer-configuration sketch is given after the table. |
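
The 80/10/10 per-user split quoted in the Dataset Splits row can be reproduced with a short script. The following is a minimal sketch, not the authors' released code: it assumes the interactions are loaded as a pandas DataFrame with `user` and `item` columns, and the function and column names are illustrative.

```python
# Hypothetical per-user 80/10/10 split mirroring the protocol quoted above.
# Assumes a DataFrame `interactions` with columns "user" and "item";
# names are illustrative, not taken from the paper's repository.
import numpy as np
import pandas as pd


def split_per_user(interactions: pd.DataFrame, seed: int = 42):
    rng = np.random.default_rng(seed)
    train_parts, val_parts, test_parts = [], [], []
    for _, user_df in interactions.groupby("user"):
        idx = rng.permutation(len(user_df))        # shuffle this user's interactions
        n_train = int(0.8 * len(user_df))
        n_val = int(0.1 * len(user_df))
        train_parts.append(user_df.iloc[idx[:n_train]])
        val_parts.append(user_df.iloc[idx[n_train:n_train + n_val]])
        test_parts.append(user_df.iloc[idx[n_train + n_val:]])
    return pd.concat(train_parts), pd.concat(val_parts), pd.concat(test_parts)
```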
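
Similarly, the Experiment Setup row translates into a straightforward PyTorch optimizer configuration. The sketch below only wires up the reported hyperparameters (64-dimensional embeddings, Adam with β1 = 0.9, β2 = 0.999, ϵ = 1e-8, batch size 2048, learning rate 0.005 decayed by 0.9 every five epochs, λ = 0.02, β = 0.01); the embedding module, `n_users`, and `n_items` are placeholders and not taken from the paper.

```python
# Minimal PyTorch sketch of the reported optimization settings; only the
# hyperparameter values come from the paper, the model itself is a placeholder.
import torch
import torch.nn as nn

n_users, n_items, dim = 10_000, 5_000, 64          # embedding dimension fixed to 64
model = nn.ModuleDict({
    "user_emb": nn.Embedding(n_users, dim),
    "item_emb": nn.Embedding(n_items, dim),
})
# Small random initialization for user/item latent vectors
for emb in model.values():
    nn.init.normal_(emb.weight, std=0.01)

optimizer = torch.optim.Adam(
    model.parameters(), lr=0.005, betas=(0.9, 0.999), eps=1e-8
)
# Learning rate decayed by a factor of 0.9 every five epochs
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.9)

batch_size = 2048
lambda_coef, beta_coef = 0.02, 0.01                # coefficients in Eq. 15
```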