Size-Invariant Graph Representations for Graph Classification Extrapolations

Authors: Beatrice Bevilacqua, Yangze Zhou, Bruno Ribeiro

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Finally, we conclude with synthetic and real-world dataset experiments showcasing the benefits of representations that are invariant to train/test distribution shifts.
Researcher Affiliation | Academia | Department of Computer Science and Department of Statistics, Purdue University, West Lafayette, Indiana, USA.
Pseudocode | No | The paper describes methods and equations, but does not contain a clearly labeled 'Pseudocode' or 'Algorithm' block.
Open Source Code | Yes | Our code is also available at https://github.com/PurdueMINDS/size-invariant-GNNs.
Open Datasets | Yes | We use the fMRI brain graph data on 71 schizophrenic patients and 74 controls for classifying individuals with schizophrenia (De Domenico et al., 2016). We consider four vertex-attributed datasets (NCI1, NCI109, DD, PROTEINS) from Morris et al. (2020), and split the data as proposed by Yehudai et al. (2021).
Dataset Splits | Yes | For each task, we report (a) training accuracy; (b) validation accuracy, which are new examples sampled from P(Y, G^tr | N^tr); and (c) extrapolation test accuracy... We employ a 5-fold cross-validation for hyperparameter tuning. (Split sketch below.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU or GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using PyTorch Geometric and PyTorch, but does not provide specific version numbers for these or other software dependencies.
Experiment Setup | Yes | The graph representations are then passed to an L-hidden-layer feedforward neural network (MLP) with softmax outputs that give the predicted classes, L ∈ {0, 1}. For practical reasons, we focus only on densities of graphs of size exactly k, which is treated as a hyperparameter. We employ a 5-fold cross-validation for hyperparameter tuning. (Density and classifier sketches below.)
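
The size-based split referenced in the Dataset Splits row can be made concrete with a short sketch. This assumes the Yehudai et al. (2021) protocol of training on the smallest graphs and holding out the largest for the extrapolation test; the 50%/10% thresholds and variable names here are illustrative assumptions, not taken from the paper, and the loader is PyTorch Geometric's TUDataset.

```python
from sklearn.model_selection import StratifiedKFold
from torch_geometric.datasets import TUDataset

dataset = TUDataset(root="data", name="NCI1")  # also: NCI109, DD, PROTEINS

# Sort graph indices by size so the held-out test graphs are strictly larger
# than anything seen during training (the extrapolation regime).
order = sorted(range(len(dataset)), key=lambda i: dataset[i].num_nodes)
n = len(dataset)
train_val_idx = order[: int(0.5 * n)]  # smallest 50%: training/validation pool
test_idx = order[int(0.9 * n):]        # largest 10%: extrapolation test set

# 5-fold cross-validation over the small-graph pool for hyperparameter tuning.
labels = [dataset[i].y.item() for i in train_val_idx]
skf = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold_train, fold_val in skf.split(train_val_idx, labels):
    train_set = [dataset[train_val_idx[i]] for i in fold_train]
    val_set = [dataset[train_val_idx[i]] for i in fold_val]
    # ... train a model on train_set, select hyperparameters on val_set ...
```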
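The Experiment Setup row mentions densities of graphs of size exactly k. A minimal Monte Carlo sketch of estimating such densities follows; the sampling estimator and the use of NetworkX's Weisfeiler-Lehman graph hash as an (approximate) isomorphism-class key are illustrative assumptions, not the paper's exact procedure.

```python
import random
from collections import Counter

import networkx as nx

def size_k_densities(G: nx.Graph, k: int, n_samples: int = 2000, seed: int = 0):
    """Estimate the frequency of each isomorphism class among the
    induced subgraphs of G on exactly k vertices (illustrative sketch)."""
    rng = random.Random(seed)
    nodes = list(G.nodes)
    counts = Counter()
    for _ in range(n_samples):
        subset = rng.sample(nodes, k)       # uniform k-vertex subset
        H = G.subgraph(subset)              # induced subgraph on the subset
        # WL hash as a near-isomorphism-class key; it can collide for some
        # non-isomorphic graphs, but is adequate for small k in a sketch.
        counts[nx.weisfeiler_lehman_graph_hash(H)] += 1
    return {key: c / n_samples for key, c in counts.items()}
```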
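Finally, the classifier head quoted in the Experiment Setup row, an MLP with L ∈ {0, 1} hidden layers and softmax outputs, might look like the following sketch; the input dimension, hidden width, and batch size are illustrative assumptions.

```python
import torch
import torch.nn as nn

def make_head(in_dim: int, n_classes: int, L: int, hidden: int = 64) -> nn.Module:
    """MLP head with L hidden layers, L restricted to {0, 1} as in the paper."""
    if L == 0:
        return nn.Linear(in_dim, n_classes)  # L = 0: plain linear classifier
    return nn.Sequential(                    # L = 1: one hidden layer
        nn.Linear(in_dim, hidden),
        nn.ReLU(),
        nn.Linear(hidden, n_classes),
    )

head = make_head(in_dim=128, n_classes=2, L=1)
logits = head(torch.randn(32, 128))          # batch of graph representations
probs = torch.softmax(logits, dim=-1)        # softmax class probabilities
```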