Multi-Component Graph Convolutional Collaborative Filtering
Authors: Xiao Wang, Ruijia Wang, Chuan Shi, Guojie Song, Qingyong Li
AAAI 2020, pp. 6267-6274
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on three real datasets and a synthetic dataset not only show the significant performance gains of MCCF, but also well demonstrate the necessity of considering multiple components. |
| Researcher Affiliation | Academia | Beijing University of Posts and Telecommunications; Peking University; Beijing Jiaotong University |
| Pseudocode | No | The paper describes the model architecture and mathematical formulations, but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not contain any statement or link indicating that the source code for the methodology is openly available. |
| Open Datasets | Yes | We conduct experiments on three real datasets: MovieLens, Amazon and Yelp, which are publicly accessible and vary in terms of domain, size and sparsity. MovieLens-100k: A widely adopted benchmark dataset in movie recommendation, which contains 100,000 ratings from 943 users to 1,682 movies. Amazon: A widely used product recommendation dataset, which contains 65,170 ratings from 1,000 users to 1,000 items. Yelp: A local business recommendation dataset, which contains 30,838 ratings from 1,286 users to 2,614 items. |
| Dataset Splits | No | For each dataset, we randomly select 80% of historical ratings as training set, and treat the remaining as test set. A specific validation split is not explicitly mentioned. |
| Hardware Specification | No | The paper does not provide specific details about the hardware specifications (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using ReLU as the activation function and Adam as the optimizer, but it does not specify any version numbers for these or any other software dependencies, such as programming languages or libraries. |
| Experiment Setup | Yes | We vary the number of components K in range {1, 2, 3, 4} and the embedding dimension d in range {8, 16, 32, 64, 128}. For neural network, we empirically employ two layers for all the neural parts and the activation function as ReLU. We randomly initialize the model parameters with a Gaussian distribution N(0, 0.1), then use the Adam as the optimizer. The batch size and learning rate are searched in {64, 128, 256, 512} and {0.0005, 0.001, 0.002, 0.0025}, respectively. Meanwhile, the dropout is applied to our model except for multi-component extraction, and the dropout rate is tested in {0.1, 0.4, 0.5, 0.6}. The parameters for L0 regularization are set according to literature (Louizos, Welling, and Kingma 2017). |
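For anyone attempting a reproduction, the reported search grids and the 80/20 random split can be encoded directly. The sketch below is our own minimal scaffolding, not code from the paper: the dictionary keys and helper names (`grid_settings`, `train_test_split`) are hypothetical, and the values are copied from the Experiment Setup and Dataset Splits rows above.

```python
import itertools
import random

# Hyperparameter grids quoted in the paper's experiment setup
# (key names are ours, not the paper's).
GRID = {
    "num_components": [1, 2, 3, 4],
    "embed_dim": [8, 16, 32, 64, 128],
    "batch_size": [64, 128, 256, 512],
    "learning_rate": [0.0005, 0.001, 0.002, 0.0025],
    "dropout": [0.1, 0.4, 0.5, 0.6],
}

def grid_settings(grid):
    """Enumerate every combination in the search grids as a dict."""
    keys = list(grid)
    for values in itertools.product(*(grid[k] for k in keys)):
        yield dict(zip(keys, values))

def train_test_split(ratings, train_frac=0.8, seed=0):
    """Random 80/20 split of rating records, as described in the paper
    (no separate validation split is mentioned)."""
    rng = random.Random(seed)
    shuffled = list(ratings)
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_frac)
    return shuffled[:cut], shuffled[cut:]

# Full grid size: 4 * 5 * 4 * 4 * 4 = 1280 candidate configurations,
# before also varying K and the embedding dimension per reported plot.
settings = list(grid_settings(GRID))
```

Note that the paper does not state whether the grids were searched exhaustively or per-dimension, so the 1280-configuration enumeration above is an upper bound on the search effort, not a claim about what the authors ran.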