Do We Need Anisotropic Graph Neural Networks?

Authors: Shyam A. Tailor, Felix Opolka, Pietro Liò, Nicholas Donald Lane

ICLR 2022

Reproducibility assessment: for each variable, the result and the supporting LLM response.
Research Type: Experimental
LLM Response: We primarily evaluate our approach on 5 datasets taken from recent works on GNN benchmarking. We use ZINC and CIFAR-10 Superpixels from Dwivedi et al. (2020) and Arxiv, MolHIV and Code from Open Graph Benchmark (Hu et al., 2020). These datasets cover a wide range of domains, span both transductive and inductive tasks, and are larger than the datasets typically used in GNN works. We use the evaluation metrics and splits specified by these papers.

Researcher Affiliation: Collaboration
LLM Response: Shyam A. Tailor (1), Felix L. Opolka (1,2), Pietro Liò (1), Nicholas D. Lane (1,3). (1) Department of Computer Science and Technology, University of Cambridge; (2) Invenia Labs, Cambridge, UK; (3) Samsung AI Center, Cambridge, UK.

Pseudocode: Yes
LLM Response: Algorithm 1: Aggregator Fusion with aggregators A. This method is a modification of the Compressed Sparse Row (CSR) SpMM algorithm, in which we maximize re-use of matrix B. Maximizing re-use enables us to obtain significantly better accuracy with minimal impact on memory and latency. For simplicity, the pseudocode assumes H = B = 1. This version demonstrates how we can remove memory overheads at inference time.

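A minimal NumPy sketch of the aggregator-fusion idea described above (illustrative, not the authors' kernel): a standard CSR SpMM would make one full pass over B per aggregator, whereas fusion loads each neighbour row of B once and feeds it to every accumulator. The function name and the sum/max aggregator pair are assumptions for illustration; features are assumed floating-point.

```python
import numpy as np

def fused_csr_spmm(indptr, indices, B):
    """Illustrative aggregator fusion over a CSR adjacency (H = B = 1 case).

    Each neighbour row B[j] is read from memory once and consumed by both
    accumulators, rather than running one SpMM pass per aggregator.
    """
    n, f = len(indptr) - 1, B.shape[1]
    out_sum = np.zeros((n, f), dtype=B.dtype)
    out_max = np.full((n, f), -np.inf, dtype=B.dtype)  # -inf for isolated nodes
    for i in range(n):                                 # rows of the sparse matrix
        for j in indices[indptr[i]:indptr[i + 1]]:
            row = B[j]                                 # loaded once, re-used below
            out_sum[i] += row
            np.maximum(out_max[i], row, out=out_max[i])
    return out_sum, out_max

# Example: a 3-node path graph 0-1-2 in CSR form, with 2 features per node.
indptr = np.array([0, 1, 3, 4])
indices = np.array([1, 0, 2, 1])
B = np.random.randn(3, 2)
s, m = fused_csr_spmm(indptr, indices, B)
```
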
Open Source Code: Yes
LLM Response: Code and pretrained models for our experiments are provided at https://github.com/shyam196/egc.

Open Datasets: Yes
LLM Response: We use ZINC and CIFAR-10 Superpixels from Dwivedi et al. (2020) and Arxiv, MolHIV and Code from Open Graph Benchmark (Hu et al., 2020).

Dataset Splits: Yes
LLM Response: We use evaluation metrics and splits specified by these papers.

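As a sketch of how these datasets and their canonical splits can be loaded (assuming the `ogb` and `torch_geometric` packages; root paths are placeholders):

```python
from ogb.graphproppred import PygGraphPropPredDataset
from ogb.nodeproppred import PygNodePropPredDataset
from torch_geometric.datasets import GNNBenchmarkDataset, ZINC

# OGB datasets ship with their standard splits.
arxiv = PygNodePropPredDataset(name='ogbn-arxiv', root='data/')
molhiv = PygGraphPropPredDataset(name='ogbg-molhiv', root='data/')
split_idx = molhiv.get_idx_split()  # {'train': ..., 'valid': ..., 'test': ...}

# ZINC (12k subset) and CIFAR-10 Superpixels from the Dwivedi et al. (2020)
# benchmark suite, which also fixes the splits via the `split` argument.
zinc_train = ZINC(root='data/ZINC', subset=True, split='train')
cifar_train = GNNBenchmarkDataset(root='data/', name='CIFAR10', split='train')
```
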
Hardware Specification: Yes
LLM Response: For CPU measurements we used an Intel Xeon Gold 5218, and for GPU we used an Nvidia RTX 8000. The GPU models in our cluster were RTX 2080Ti and GTX 1080Ti. High-memory experiments were run on V100s in our cluster and on an RTX 8000 virtual machine we had access to.

Software Dependencies: No
LLM Response: The paper mentions that EGC has been upstreamed to PyTorch Geometric, implying its use, but it does not specify exact version numbers for PyTorch Geometric or any other software dependencies.

Experiment Setup: Yes
LLM Response: In order to provide a fair comparison we standardize all parameter counts, architectures and optimizers in our experiments. All experiments were run using Adam (Kingma & Ba, 2014). For EGC-S, we use H = 8 and B = 4 ... For EGC-M, we use H = B = 4 for all experiments. We train models using batch size 128.

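Since EGC has been upstreamed to PyTorch Geometric, the reported hyperparameters map directly onto `EGConv`'s arguments. A minimal sketch, assuming illustrative channel sizes and an illustrative EGC-M aggregator combination (the paper selects aggregator sets per dataset):

```python
import torch
from torch_geometric.nn import EGConv

# EGC-S: single symmetric-normalisation aggregator, H = 8 heads, B = 4 bases.
conv_s = EGConv(128, 128, aggregators=['symnorm'], num_heads=8, num_bases=4)

# EGC-M: several fused aggregators, H = B = 4. The aggregator set shown here
# is illustrative only.
conv_m = EGConv(128, 128, aggregators=['symnorm', 'max', 'mean'],
                num_heads=4, num_bases=4)

x = torch.randn(10, 128)                    # 10 nodes, 128 features
edge_index = torch.randint(0, 10, (2, 40))  # random edges for illustration
out = conv_m(x, edge_index)                 # -> shape [10, 128]

# Training used Adam (Kingma & Ba, 2014) with batch size 128; lr illustrative.
optimizer = torch.optim.Adam(conv_m.parameters(), lr=1e-3)
```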