Graph Filtration Learning

Authors: Christoph Hofer, Florian Graf, Bastian Rieck, Marc Niethammer, Roland Kwitt

ICML 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, we show that this type of readout operation compares favorably to previous techniques, especially when the graph connectivity structure is informative for the learning problem.
Researcher Affiliation | Academia | Department of Computer Science, University of Salzburg, Austria; Department of Biosystems Science and Engineering, ETH Zurich, Switzerland; UNC Chapel Hill
Pseudocode | No | The paper describes the process and method in prose and through diagrams (Figure 1), but it does not include a dedicated pseudocode block or algorithm listing.
Open Source Code | Yes | Source code is publicly available at https://github.com/c-hofer/graph_filtration_learning.
Open Datasets | Yes | We use two common benchmark datasets for graphs with discrete node attributes, i.e., PROTEINS and NCI1, as well as four social network datasets (IMDB-BINARY, IMDB-MULTI, REDDIT-BINARY, REDDIT-5k) which do not contain any node attributes (see supplementary material).
Dataset Splits | Yes | For evaluation, we follow previous work (see, e.g., Morris et al., 2019; Zhang et al., 2018a) and report cross-validation accuracy, averaged over ten folds, of the model obtained in the final training epoch.
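
For orientation, the sketch below shows what the 0-dimensional sublevel-set persistence computation underlying such a filtration-based readout could look like. This is a hypothetical illustration, not the authors' code: in the paper, the per-vertex filtration values come from a learnable function and the resulting barcodes are passed to a learnable vectorization.

```python
# Hypothetical sketch (not from the paper or its repository): 0-dimensional
# persistence of a vertex-induced sublevel-set filtration on a graph.
def zero_dim_persistence(node_values, edges):
    """node_values: one filtration value per vertex; edges: (u, v) index pairs.
    Returns (birth, death) pairs; essential components get death = inf."""
    n = len(node_values)
    parent = list(range(n))
    birth = list(node_values)  # birth value of the component rooted at each index

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    # An edge enters the sublevel-set filtration at its larger endpoint value.
    order = sorted(edges, key=lambda e: max(node_values[e[0]], node_values[e[1]]))
    pairs = []
    for u, v in order:
        ru, rv = find(u), find(v)
        if ru == rv:
            continue  # the edge closes a cycle; irrelevant in dimension 0
        death = max(node_values[u], node_values[v])
        # Elder rule: the component born later (larger birth value) dies here.
        if birth[ru] < birth[rv]:
            ru, rv = rv, ru
        pairs.append((birth[ru], death))
        parent[ru] = rv
    # One essential pair per connected component.
    for x in range(n):
        if find(x) == x:
            pairs.append((birth[x], float('inf')))
    return pairs
```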
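
All six listed benchmarks are part of the TUDataset collection, so a reproduction could load them as sketched below, assuming PyTorch Geometric as the loader (the paper does not state which loading library it uses); note that the TUDataset identifier for REDDIT-5k is REDDIT-MULTI-5K.

```python
# Hedged sketch: loading the listed benchmarks via PyTorch Geometric
# (an assumption; the paper does not name its data-loading library).
from torch_geometric.datasets import TUDataset

names = ["PROTEINS", "NCI1", "IMDB-BINARY", "IMDB-MULTI",
         "REDDIT-BINARY", "REDDIT-MULTI-5K"]
datasets = {name: TUDataset(root="data/" + name, name=name) for name in names}
for name, ds in datasets.items():
    print(name, len(ds), "graphs,", ds.num_classes, "classes")
```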
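
A hedged sketch of this evaluation protocol, assuming stratified folds via scikit-learn (the exact fold generation used by the authors is not specified in the quoted text):

```python
# Hedged sketch of 10-fold cross-validation reporting averaged final-epoch
# accuracy; `train_and_eval` is a hypothetical callable supplied by the user.
import numpy as np
from sklearn.model_selection import StratifiedKFold

def cross_validate(train_and_eval, labels, seed=0):
    """train_and_eval(train_idx, test_idx) trains a fresh model and returns
    the test accuracy of the final training epoch."""
    skf = StratifiedKFold(n_splits=10, shuffle=True, random_state=seed)
    accs = [train_and_eval(tr, te)
            for tr, te in skf.split(np.zeros((len(labels), 1)), labels)]
    return float(np.mean(accs)), float(np.std(accs))
```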
Hardware Specification | No | The paper mentions running computations on a "parallel GPU variant" in Section 5.2, but it does not specify the GPU model or any other hardware details such as CPU or memory.
Software Dependencies | No | The paper states that the implementation uses "PyTorch", but it does not specify a version number for this or any other software dependency.
Experiment Setup | Yes | We train for 100 epochs using ADAM with an initial learning rate of 0.01 (halved every 20th epoch) and a weight decay of 10^-6. No hyperparameter tuning or early stopping is used.
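
For reproducibility, version information could be recorded alongside results; a minimal sketch (an addition for illustration, not something the paper does):

```python
# Hedged sketch: log the Python/PyTorch/CUDA versions, since the paper
# names PyTorch but not a version.
import sys
import torch

print("python:", sys.version.split()[0])
print("torch:", torch.__version__)
print("cuda:", torch.version.cuda)  # None on CPU-only builds
```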
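
The stated setup translates directly into PyTorch; the sketch below uses a stand-in linear model and synthetic data, so only the optimizer, schedule, and weight decay reflect the paper.

```python
# Hedged sketch of the stated optimization setup; the model and data are
# stand-ins, not the paper's graph filtration learning architecture.
import torch
import torch.nn as nn

model = nn.Linear(8, 2)                      # placeholder model
data = torch.randn(64, 8)                    # placeholder inputs
target = torch.randint(0, 2, (64,))          # placeholder labels
loss_fn = nn.CrossEntropyLoss()

optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=1e-6)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=20, gamma=0.5)

for epoch in range(100):                     # 100 epochs, no early stopping
    optimizer.zero_grad()
    loss = loss_fn(model(data), target)
    loss.backward()
    optimizer.step()
    scheduler.step()                         # learning rate halved every 20 epochs
```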