Weisfeiler and Leman Go Neural: Higher-Order Graph Neural Networks

Authors: Christopher Morris, Martin Ritzert, Matthias Fey, William L. Hamilton, Jan Eric Lenssen, Gaurav Rattan, Martin Grohe

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experimental evaluation confirms our theoretical findings as well as confirms that higher-order information is useful in the task of graph classification and regression.
Researcher Affiliation | Academia | TU Dortmund University, RWTH Aachen University, McGill University and MILA; {christopher.morris, matthias.fey, janeric.lenssen}@tu-dortmund.de, {ritzert, rattan, grohe}@informatik.rwth-aachen.de, wlh@cs.mcgill.ca
Pseudocode | No | The paper describes algorithms using mathematical equations and textual explanations, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | The code was built upon the work of (Fey et al. 2018) and is provided at https://github.com/chrsmrrs/k-gnn.
Open Datasets | Yes | To compare our k-GNN architectures to kernel approaches we use well-established benchmark datasets from the graph kernel literature (Kersting et al. 2016). ... To demonstrate that our architectures scale to larger datasets and offer benefits on real-world applications, we conduct experiments on the QM9 dataset (Ramakrishnan et al. 2014; Ruddigkeit et al. 2012; Wu et al. 2018).
Dataset Splits | Yes | For the smaller datasets, which we use for comparison against the kernel methods, we performed a 10-fold cross validation where we randomly sampled 10% of each training fold to act as a validation set. For the QM9 dataset, we follow the dataset splits described in (Wu et al. 2018). We randomly sampled 10% of the examples for validation, another 10% for testing, and used the remaining for training. (An illustrative sketch of this split protocol appears after the table.)
Hardware Specification | No | The paper mentions 'limited GPU memory' when discussing scalability and implies GPU usage for training, but it does not provide specific details on the hardware (e.g., GPU model, CPU, memory).
Software Dependencies | No | The paper mentions the 'C-SVM implementation of LIBSVM (Chang and Lin 2011)', the 'Adam optimizer', and that 'The code was built upon the work of (Fey et al. 2018)', but it does not provide specific version numbers for software dependencies or frameworks.
Experiment Setup | Yes | We always used three layers for 1-GNN, and two layers for (local) 2-GNN and 3-GNN, all with a hidden-dimension size of 64. ... For the final classification and regression steps, we used a three layer MLP, with binary cross entropy and mean squared error for the optimization, respectively. For classification we used a dropout layer with p = 0.5 after the first layer of the MLP. ... Moreover, we used the Adam optimizer with an initial learning rate of 10^-2 and applied an adaptive learning rate decay based on validation results to a minimum of 10^-5. We trained the classification networks for 100 epochs and the regression networks for 200 epochs. (An illustrative sketch of this training configuration follows the table.)
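
The quoted experiment setup can be approximated with a short PyTorch sketch. This is not the authors' code (their implementation is in the linked k-gnn repository); it only wires up the hyperparameters quoted above. The scheduler's decay factor and patience, and the specific choices of BCEWithLogitsLoss and ReduceLROnPlateau, are assumptions, since the quote only states the learning-rate range, the loss type, and the epoch counts.

```python
# Illustrative sketch of the reported training configuration (not the authors' code).
import torch
import torch.nn as nn

hidden = 64  # hidden-dimension size reported for all GNN layers

# Three-layer MLP head for binary graph classification,
# with dropout p = 0.5 after the first layer, as quoted above.
mlp_head = nn.Sequential(
    nn.Linear(hidden, hidden), nn.ReLU(), nn.Dropout(p=0.5),
    nn.Linear(hidden, hidden), nn.ReLU(),
    nn.Linear(hidden, 1),
)

optimizer = torch.optim.Adam(mlp_head.parameters(), lr=1e-2)  # initial lr 10^-2
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=5, min_lr=1e-5
)  # factor/patience are assumptions; the paper only states the 1e-2 -> 1e-5 range
criterion = nn.BCEWithLogitsLoss()  # binary cross entropy for classification

for epoch in range(100):  # 100 epochs for classification (200 for regression)
    # Forward pass over the GNN layers would go here, then e.g.:
    # loss = criterion(mlp_head(graph_embedding), labels)
    # loss.backward(); optimizer.step(); optimizer.zero_grad()
    # scheduler.step(val_loss)  # validation-based learning-rate decay
    pass
```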
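
The split protocol for the smaller benchmark datasets can likewise be illustrated with a minimal scikit-learn sketch. The dataset size, random seeds, and the use of KFold/train_test_split are illustrative assumptions; the paper only states 10-fold cross validation with 10% of each training fold sampled as validation data (and a 10%/10%/80% validation/test/train split for QM9).

```python
# Illustrative sketch of the described split protocol (not the authors' code).
import numpy as np
from sklearn.model_selection import KFold, train_test_split

num_graphs = 1000                  # placeholder dataset size
indices = np.arange(num_graphs)

kfold = KFold(n_splits=10, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(kfold.split(indices)):
    # Randomly sample 10% of the training fold to act as a validation set.
    train_idx, val_idx = train_test_split(train_idx, test_size=0.1, random_state=0)
    # ... train on train_idx, select the model on val_idx, evaluate on test_idx
```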