GNN-FiLM: Graph Neural Networks with Feature-wise Linear Modulation
Authors: Marc Brockschmidt
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Different GNN architectures are compared in extensive experiments on three tasks from the literature |
| Researcher Affiliation | Industry | Microsoft Research, Cambridge, UK. |
| Pseudocode | No | The paper includes Figure 2, which illustrates the GNN computation, but it is not pseudocode or an algorithm block. |
| Open Source Code | Yes | All code for the implementation of these GNNs is released on https://github.com/Microsoft/tf-gnn-samples |
| Open Datasets | Yes | This article includes results on the PPI, QM9 and VarMisuse tasks. QM9 property prediction (Ramakrishnan et al., 2014); PPI (Zitnik & Leskovec, 2017); VarMisuse (Allamanis et al., 2018) |
| Dataset Splits | Yes | QM9: The 130k molecular graphs in the dataset were split into training, validation and test data by randomly selecting 10 000 graphs for the latter two sets. VarMisuse: using the released split of the dataset, which contains 130k training graphs, 20k validation graphs and two test sets: SEENPROJTEST, ... and UNSEENPROJTEST (see the split sketch after this table) |
| Hardware Specification | Yes | trained ten times with different random seeds on a NVidia V100. ... on compute nodes with NVidia P100 cards. |
| Software Dependencies | No | The paper mentions re-implementations in TensorFlow but does not provide specific version numbers for TensorFlow or any other software dependencies. |
| Experiment Setup | Yes | This resulted in three (R-GAT), four (GGNN, GNN-FiLM, GNN-MLP1, R-GCN), or five (GNN-MLP0, R-GIN) layers (propagation steps) and a node representation size of 256 (GNN-MLP0, R-GIN) or 320 (all others). All models use dropout on the node representations before all GNN layers, with a keep ratio of 0.9. ... Furthermore, all models used residual connections connecting every second layer and GGNN, R-GCN, GNN-FiLM and GNN-MLP0 additionally used layer normalisation. (see the configuration sketch after this table) |
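
The QM9 split quoted in the Dataset Splits row is a plain random hold-out: 10 000 graphs each for validation and test, the remainder for training. The following is a minimal sketch of such a split, assuming the dataset has already been loaded as a Python list of graphs; the function name `random_split` and the seed handling are illustrative and are not taken from the released tf-gnn-samples code.

```python
import random

def random_split(graphs, num_valid=10_000, num_test=10_000, seed=0):
    """Shuffle the dataset and hold out validation and test graphs."""
    indices = list(range(len(graphs)))
    random.Random(seed).shuffle(indices)
    valid_idx = indices[:num_valid]
    test_idx = indices[num_valid:num_valid + num_test]
    train_idx = indices[num_valid + num_test:]
    take = lambda idx: [graphs[i] for i in idx]
    return take(train_idx), take(valid_idx), take(test_idx)

# Example: ~130k QM9 graphs -> ~110k train / 10k valid / 10k test.
# train_graphs, valid_graphs, test_graphs = random_split(qm9_graphs)
```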
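
The hyperparameters quoted in the Experiment Setup row can be collected into a per-model configuration. The sketch below restates those values and one plausible stacking order (dropout on node representations before every GNN layer, residual connections joining every second layer, optional layer normalisation). The dictionary layout, the helper functions, and the exact placement of the residual additions are assumptions for illustration, not the authors' released implementation.

```python
import numpy as np

# Per-model settings restated from the quote above (propagation steps,
# node representation size, whether layer normalisation is used).
HYPERPARAMS = {
    "R-GAT":    dict(num_layers=3, hidden_dim=320, layer_norm=False),
    "GGNN":     dict(num_layers=4, hidden_dim=320, layer_norm=True),
    "GNN-FiLM": dict(num_layers=4, hidden_dim=320, layer_norm=True),
    "GNN-MLP1": dict(num_layers=4, hidden_dim=320, layer_norm=False),
    "R-GCN":    dict(num_layers=4, hidden_dim=320, layer_norm=True),
    "GNN-MLP0": dict(num_layers=5, hidden_dim=256, layer_norm=True),
    "R-GIN":    dict(num_layers=5, hidden_dim=256, layer_norm=False),
}
DROPOUT_KEEP_RATIO = 0.9  # dropout on node representations before every GNN layer

rng = np.random.default_rng(0)

def dropout(x, keep_prob, training):
    """Inverted dropout on node representations (illustrative helper)."""
    if not training:
        return x
    mask = rng.random(x.shape) < keep_prob
    return np.where(mask, x / keep_prob, 0.0)

def layer_norm(x, eps=1e-5):
    """Layer normalisation over the feature dimension (illustrative helper)."""
    mean = x.mean(axis=-1, keepdims=True)
    var = x.var(axis=-1, keepdims=True)
    return (x - mean) / np.sqrt(var + eps)

def forward(node_states, gnn_layers, use_layer_norm, training=True):
    """Stack GNN layers: dropout before each layer, residual connections
    joining every second layer, optional layer normalisation."""
    h = residual = node_states
    for i, layer in enumerate(gnn_layers):
        h = dropout(h, DROPOUT_KEEP_RATIO, training)
        h = layer(h)          # one message-passing / propagation step
        if use_layer_norm:
            h = layer_norm(h)
        if (i + 1) % 2 == 0:  # assumed reading of "every second layer"
            h = h + residual
            residual = h
    return h
```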