Inductive Matrix Completion Based on Graph Neural Networks
Authors: Muhan Zhang, Yixin Chen
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We compare IGMC with state-of-the-art matrix completion algorithms on five benchmark datasets. Without using any content, IGMC achieves the smallest RMSEs on four of them, even beating many transductive baselines augmented by side information. |
| Researcher Affiliation | Collaboration | Muhan Zhang*, Washington University in St. Louis, muhan@wustl.edu (*now at Facebook); Yixin Chen, Washington University in St. Louis, chen@wustl.edu |
| Pseudocode | Yes | Algorithm 1 ENCLOSING SUBGRAPH EXTRACTION (a hedged sketch of this procedure follows the table) |
| Open Source Code | Yes | Our code is publicly available at https://github.com/muhanzhang/IGMC. |
| Open Datasets | Yes | We conduct experiments on five common matrix completion datasets: Flixster (Jamali & Ester, 2010), Douban (Ma et al., 2011), Yahoo Music (Dror et al., 2011), MovieLens-100K and MovieLens-1M (Miller et al., 2003). |
| Dataset Splits | Yes | For ML-100K, we train and evaluate on the canonical u1.base/u1.test train/test split. For ML-1M, we randomly split it into 90% and 10% train/test sets. For Flixster, Douban and Yahoo Music we use the preprocessed subsets and splits provided by Monti et al. (2017). (A loading sketch for the ML-100K split follows the table.) |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper states 'We implemented IGMC using pytorch geometric (Fey & Lenssen, 2019)' but does not provide specific version numbers for PyTorch Geometric or any other key software libraries. |
| Experiment Setup | Yes | The final architecture uses 4 R-GCN layers with 32, 32, 32, 32 hidden dimensions. Basis decomposition with 4 bases is used... The final MLP has 128 hidden units and a dropout rate of 0.5. We use 1-hop enclosing subgraphs... randomly drop out its adjacency matrix entries with a probability of 0.2... We set the λ in (7) to 0.001. We train our model using the Adam optimizer... with a batch size of 50 and an initial learning rate of 0.001, and multiply the learning rate by 0.1 every 20 epochs for ML-1M, and every 50 epochs for all other datasets. (See the architecture and training sketch after the table.) |
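
The Pseudocode row quotes Algorithm 1 (enclosing subgraph extraction). Below is a minimal sketch of that procedure, not the authors' released code: it assumes a SciPy CSR user-by-item rating matrix, and the helper names (`neighbors`, `extract_enclosing_subgraph`) are ours. Node labels follow the paper's scheme (users at hop k get label 2k, items get 2k + 1), and the target edge is masked out before the subgraph is returned.

```python
import scipy.sparse as sp

def neighbors(fringe, A):
    """Union of the nonzero column indices over the rows listed in `fringe`."""
    if not fringe:
        return set()
    return set(A[list(fringe)].indices)

def extract_enclosing_subgraph(A, u, v, h=1):
    """h-hop enclosing subgraph around the target pair (user u, item v).

    A: scipy.sparse.csr_matrix of ratings (0 = unobserved).
    Returns user/item node lists, the subgraph's rating matrix with the
    target edge removed, and hop-based node labels.
    """
    At = A.T.tocsr()  # item-by-user view, used to expand the user fringe
    u_nodes, v_nodes = [u], [v]
    u_dist, v_dist = [0], [0]
    u_visited, v_visited = {u}, {v}
    u_fringe, v_fringe = {u}, {v}
    for dist in range(1, h + 1):
        # expand both fringes simultaneously, never revisiting nodes
        u_fringe, v_fringe = (neighbors(v_fringe, At) - u_visited,
                              neighbors(u_fringe, A) - v_visited)
        u_visited |= u_fringe
        v_visited |= v_fringe
        u_nodes += sorted(u_fringe)
        v_nodes += sorted(v_fringe)
        u_dist += [dist] * len(u_fringe)
        v_dist += [dist] * len(v_fringe)
    sub = A[u_nodes][:, v_nodes].tolil()
    sub[0, 0] = 0  # mask the target rating: it is what IGMC must predict
    labels = [2 * d for d in u_dist] + [2 * d + 1 for d in v_dist]
    return u_nodes, v_nodes, sub.tocsr(), labels
```

With h = 1 (the 1-hop subgraphs used in the paper's final setup), the extracted subgraph contains the target pair plus the items rated by u and the users who rated v.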
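For the ML-100K split quoted in the Dataset Splits row, the canonical files are tab-separated (user, item, rating, timestamp) tuples, and u1.base/u1.test is the standard 80%/20% split that ships with the dataset. The paths below are an assumption about where the archive was unpacked.

```python
import pandas as pd

cols = ["user_id", "item_id", "rating", "timestamp"]
train = pd.read_csv("ml-100k/u1.base", sep="\t", names=cols)  # hypothetical path
test = pd.read_csv("ml-100k/u1.test", sep="\t", names=cols)
```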
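The Experiment Setup row pins down the final architecture: 4 R-GCN layers of 32 units with 4-basis decomposition, followed by a 128-unit MLP with dropout 0.5. Below is a minimal PyTorch Geometric sketch of such a network, not the authors' implementation: `in_dim` (the node-label feature width) and `num_relations=5` (one relation per rating level, as on MovieLens) are assumptions, and the λ = 0.001 adjacent-rating regularizer of the paper's Eq. (7) is omitted.

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import RGCNConv

class IGMCSketch(torch.nn.Module):
    """4 stacked R-GCN layers (32 dims each, 4 bases), tanh activations,
    concatenated per-layer states of the target user/item nodes, then a
    128-unit MLP with dropout 0.5 that regresses the rating."""

    def __init__(self, in_dim, num_relations=5):
        super().__init__()
        dims = [in_dim, 32, 32, 32, 32]
        self.convs = torch.nn.ModuleList(
            [RGCNConv(dims[i], dims[i + 1], num_relations, num_bases=4)
             for i in range(4)]
        )
        self.lin1 = torch.nn.Linear(2 * 4 * 32, 128)  # target user + item, 4 layers each
        self.lin2 = torch.nn.Linear(128, 1)

    def forward(self, x, edge_index, edge_type, target_user, target_item):
        states = []
        for conv in self.convs:
            x = torch.tanh(conv(x, edge_index, edge_type))
            states.append(x)
        h = torch.cat(states, dim=1)  # concatenate all layers' node states
        g = torch.cat([h[target_user], h[target_item]], dim=0)
        out = F.dropout(F.relu(self.lin1(g)), p=0.5, training=self.training)
        return self.lin2(out).squeeze(-1)
```

Under the quoted training recipe, one would pair this with `torch.optim.Adam(model.parameters(), lr=1e-3)`, a `torch.optim.lr_scheduler.StepLR` with `gamma=0.1` (`step_size=20` for ML-1M, `step_size=50` for the other datasets), batch size 50, and a 0.2 edge dropout applied to each subgraph's adjacency during training.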