Graph Neural Networks with Local Graph Parameters
Authors: Pablo Barceló, Floris Geerts, Juan Reutter, Maksimilian Ryschkov
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental evaluation shows that adding local graph parameters often has a positive effect on a variety of GNNs, datasets and graph learning tasks. |
| Researcher Affiliation | Academia | (1) Department of Computer Science, PUC, Chile; (2) Millennium Institute for Foundational Research on Data, Chile; (3) Department of Computer Science, University of Antwerp, Belgium. Emails: [pbarcelo,jreutter]@ing.puc.cl, [floris.geerts,maksimilian.ryschkov]@uantwerpen.be |
| Pseudocode | No | The paper defines algorithms and processes using mathematical notation and descriptive text, but it does not include a clearly labeled pseudocode block or algorithm. |
| Open Source Code | Yes | Code to reproduce our experiments is available at https://github.com/MrRyschkov/LGP-GNN |
| Open Datasets | Yes | We select the best architectures from Dwivedi et al. [2020]: Graph Attention Networks (GAT) [Velickovic et al., 2018], Graph Convolutional Networks (GCN) [Kipf and Welling, 2017], GraphSage [Hamilton et al., 2017], Gaussian Mixture Models (MoNet) [Monti et al., 2017] and GatedGCN [Bresson and Laurent, 2017]. We leave out various linear architectures such as GIN [Xu et al., 2019] as they were shown to perform poorly on the benchmark. Learning tasks and datasets. As in Dwivedi et al. [2020] we consider (i) graph regression and the ZINC dataset [Irwin et al., 2012, Dwivedi et al., 2020]; (ii) vertex classification and the PATTERN and CLUSTER datasets [Dwivedi et al., 2020]; and (iii) link prediction and the COLLAB dataset [Hu et al., 2020]. |
| Dataset Splits | Yes | Graphs were divided between training/test as instructed by Dwivedi et al. [2020], and all numbers reported correspond to the test set (see the split-loading sketch after this table). |
| Hardware Specification | Yes | All models for ZINC, PATTERN and COLLAB were trained on a GeForce GTX 1080 Ti GPU; for CLUSTER a Tesla V100-SXM3-32GB GPU was used. |
| Software Dependencies | No | The paper mentions using DISC [Zhang et al., 2020] but does not provide specific version numbers for this or any other software libraries or dependencies used in the experiments. |
| Experiment Setup | Yes | Here we report results using 16 message-passing layers for ZINC, PATTERN, and CLUSTER, and 3 message-passing layers for COLLAB, as in Dwivedi et al. [2020]. In the supplementary material we report comparable results using only 4 layers for ZINC and PATTERN. We use the z-score of the logarithms of homomorphism counts to make them standard-normally distributed and comparable to other features (a normalization sketch follows this table). |
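For context on the dataset-splits row: the splits follow the fixed train/val/test protocol of the Dwivedi et al. [2020] benchmark. As a minimal sketch of obtaining the same standard ZINC 12k splits, here is one way using PyTorch Geometric; this is an illustration only, not the authors' own pipeline, and the `root` path is a placeholder.

```python
# Minimal sketch: loading the standard ZINC 12k splits used by the
# Dwivedi et al. [2020] benchmark via PyTorch Geometric. Illustration
# only; the authors' repository may use a different loading pipeline.
from torch_geometric.datasets import ZINC
from torch_geometric.loader import DataLoader

root = "data/ZINC"  # placeholder download/cache directory
train_set = ZINC(root, subset=True, split="train")  # 10,000 graphs
val_set = ZINC(root, subset=True, split="val")      # 1,000 graphs
test_set = ZINC(root, subset=True, split="test")    # 1,000 graphs

# All numbers reported in the paper correspond to the test split.
test_loader = DataLoader(test_set, batch_size=128, shuffle=False)
```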
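The z-score-of-logs normalization quoted in the experiment-setup row is simple to state in code. A minimal sketch, assuming the homomorphism counts sit in a NumPy array with one column per pattern; the `log1p` guard against zero counts and all names here are assumptions for illustration, not taken from the paper.

```python
import numpy as np

def normalize_hom_counts(hom_counts: np.ndarray) -> np.ndarray:
    """Log-transform homomorphism counts, then z-score each pattern column.

    hom_counts: shape (num_vertices, num_patterns), one column per local
    graph parameter (e.g., counts of a small pattern such as a cycle).
    """
    # log1p rather than log is an assumption: raw counts can be zero.
    logs = np.log1p(hom_counts.astype(np.float64))
    # Standardize per column so the features are ~standard-normally
    # distributed and comparable in scale to the other vertex features.
    mean = logs.mean(axis=0)
    std = logs.std(axis=0)
    return (logs - mean) / np.where(std > 0, std, 1.0)

# Example: counts for 5 vertices and 2 patterns (hypothetical numbers).
counts = np.array([[3, 0], [10, 2], [7, 1], [0, 0], [25, 4]])
features = normalize_hom_counts(counts)
```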