Topological Graph Neural Networks

Authors: Max Horn, Edward De Brouwer, Michael Moor, Yves Moreau, Bastian Rieck, Karsten Borgwardt

ICLR 2022

Each row below gives a reproducibility variable, the assessed result, and the supporting LLM response; illustrative code sketches for several rows follow the table.
Research Type: Experimental. "We showcase the empirical performance of TOGL on a set of synthetic and real-world data sets, with a primary focus on assessing in which scenarios topology can enhance and improve learning on graphs. Next to demonstrating improved predictive performance for synthetic and structure-based data sets (Section 5.2 and Section 5.3), we also compare TOGL with existing topology-based algorithms (Section 5.5)."
Researcher Affiliation: Academia. Max Horn (1,2), Edward De Brouwer (3), Michael Moor (1,2), Yves Moreau (3), Bastian Rieck (1,2,4,5), Karsten Borgwardt (1,2). (1) Department of Biosystems Science and Engineering, ETH Zurich, 4058 Basel, Switzerland; (2) SIB Swiss Institute of Bioinformatics, Switzerland; (3) ESAT-STADIUS, KU Leuven, 3001 Leuven, Belgium; (4) Institute of AI for Health, Helmholtz Munich, 85764 Neuherberg, Germany; (5) Technical University of Munich, 80333 Munich, Germany.
Pseudocode: No. The paper describes the architecture and process of TOGL through text and diagrams (Figure 2), but it does not provide any explicitly labeled pseudocode or algorithm blocks (see the persistence sketch after this table).
Open Source Code: Yes. "Our code is released under a BSD-3-Clause License and can be accessed under https://github.com/BorgwardtLab/TOGL."
Open Datasets: Yes. "As for the data sets, we use data sets that are available in pytorch-geometric for graph learning tasks. Some of the benchmark data sets have been originally provided by Morris et al. (2020), others (CIFAR-10, CLUSTER, MNIST, PATTERN) have been provided by Dwivedi et al. (2020) in the context of a large-scale graph neural network benchmarking effort." (A loading sketch follows the table.)
Dataset Splits: Yes. "During training, the loss on the validation split is monitored and the learning rate is halved if the validation loss does not improve over a period of lr_patience." (A scheduler sketch follows the table.)
Hardware Specification: Yes. "Most of the jobs were run on our internal cluster, comprising 64 physical cores (Intel(R) Xeon(R) CPU E5-2620 v4 @ 2.10GHz) with 8 GeForce GTX 1080 GPUs. A smaller fraction has been run on another cluster, containing 40 physical cores (Intel(R) Xeon(R) CPU E5-2630L v4 @ 1.80GHz) with 2 Quadro GV100 GPUs and 1 Titan XP GPU."
Software Dependencies: No. "Our method is implemented in Python, making heavy use of the pytorch-geometric library (Fey and Lenssen, 2019), licensed under the MIT License, and the pytorch-lightning library (Falcon et al., 2019), licensed under the Apache 2.0 License." The paper names these libraries but does not provide specific version numbers for them or for the Python interpreter (a version-recording sketch follows the table).
Experiment Setup: Yes. "Following the setup of Dwivedi et al. (2020), we ran all experiments according to a consistent training setup and a limited parameter budget to encourage comparability between architectures. ... The parameters for the different data sets are shown in Table S5. ... Table S6 contains a listing of all hyperparameters used to train TOGL." (A budget-check sketch follows the table.)
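
Since the paper provides no pseudocode, the following is a minimal sketch, not the authors' implementation, of one building block TOGL relies on: dimension-0 persistent homology of a graph under a vertex filtration, computed with union-find and the elder rule. All names and the example values are ours.

```python
# Minimal sketch (not the authors' code): dimension-0 persistent homology of a
# graph under a vertex (sublevel-set) filtration, via union-find and the elder
# rule. TOGL computes such diagrams for learned filtrations; this illustrates
# only the pairing step.
def zero_dim_persistence(filtration, edges):
    """filtration: value f(v) per vertex; edges: iterable of (u, v) pairs.

    Returns finite (birth, death) pairs; components that never merge are
    essential features and are omitted here.
    """
    parent = list(range(len(filtration)))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    pairs = []
    # An edge enters the sublevel set once both endpoints are present, i.e.
    # at the larger of the two endpoint values.
    for u, v in sorted(edges, key=lambda e: max(filtration[e[0]], filtration[e[1]])):
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # the edge closes a cycle (a dimension-1 feature)
        if filtration[ru] > filtration[rv]:
            ru, rv = rv, ru  # keep the older component's root as the survivor
        pairs.append((filtration[rv], max(filtration[u], filtration[v])))
        parent[rv] = ru
    return pairs


# Tiny usage example: a path graph 0-1-2 with filtration values [0.0, 2.0, 1.0].
print(zero_dim_persistence([0.0, 2.0, 1.0], [(0, 1), (1, 2)]))
```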
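The data sets named in the Open Datasets row can be loaded directly from pytorch-geometric. A hedged loading sketch follows; the data set names and root directories are illustrative choices on our part, not prescribed by the paper.

```python
# Loading sketch; names and root paths are illustrative, not the paper's config.
from torch_geometric.datasets import TUDataset, GNNBenchmarkDataset

# Benchmark data sets originally provided by Morris et al. (2020).
proteins = TUDataset(root='data/TU', name='PROTEINS')

# Benchmarking data sets of Dwivedi et al. (2020), shipped with fixed splits.
pattern_train = GNNBenchmarkDataset(root='data/GNNBench', name='PATTERN',
                                    split='train')
print(len(proteins), len(pattern_train))
```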
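The learning-rate schedule in the Dataset Splits row matches the semantics of ReduceLROnPlateau in PyTorch. A minimal sketch under that assumption; the model, the epoch count, and the patience value of 10 are placeholders, not the paper's settings.

```python
import torch

model = torch.nn.Linear(8, 2)  # placeholder model
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
# Halve the learning rate when the monitored loss stops improving for
# `patience` epochs; patience=10 stands in for the paper's lr_patience.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode='min', factor=0.5, patience=10)

for epoch in range(100):
    # ... training step omitted ...
    val_loss = torch.rand(1).item()  # stand-in for the validation-split loss
    scheduler.step(val_loss)
```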
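Because the Software Dependencies row flags missing version numbers, one simple way to close that gap when re-running the released code is to record the versions actually in use; a minimal sketch:

```python
# Record interpreter and library versions for a reproducibility log.
import sys
import torch
import torch_geometric
import pytorch_lightning

print("python:           ", sys.version.split()[0])
print("torch:            ", torch.__version__)
print("torch_geometric:  ", torch_geometric.__version__)
print("pytorch_lightning:", pytorch_lightning.__version__)
```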
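The "limited parameter budget" in the Experiment Setup row can be enforced with a simple check. In this sketch both the model and the budget value are assumptions for illustration; the actual per-data-set settings live in the paper's Tables S5 and S6.

```python
import torch

def count_parameters(model: torch.nn.Module) -> int:
    """Number of trainable parameters in a model."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad)

# Placeholder model and an assumed budget, for illustration only.
model = torch.nn.Sequential(torch.nn.Linear(32, 64), torch.nn.ReLU(),
                            torch.nn.Linear(64, 2))
BUDGET = 100_000  # assumed value, not taken from the paper
assert count_parameters(model) <= BUDGET, "model exceeds the parameter budget"
print(count_parameters(model), "trainable parameters")
```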