Robustness of Graph Neural Networks at Scale

Authors: Simon Geisler, Tobias Schmidt, Hakan Şirin, Daniel Zügner, Aleksandar Bojchevski, Stephan Günnemann

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our attacks and defense with standard GNNs on graphs more than 100 times larger compared to previous work.
Researcher Affiliation | Academia | Department of Informatics, Technical University of Munich; {geisler, schmidtt, sirin, zuegnerd, bojchevs, guennemann}@in.tum.de
Pseudocode | Yes | Algorithm 1: Projected Randomized Block Coordinate Descent (PR-BCD); see the sketch after the table.
Open Source Code | Yes | For supplementary material including the code and configuration see https://www.in.tum.de/daml/robustness-of-gnns-at-scale.
Open Datasets | Yes | Adjacency-matrix memory per dataset (nodes, dense, sparse; see the arithmetic check after the table): Cora ML [2]: 2.8k nodes, 35.88 MB dense, 168.32 kB sparse; Citeseer [28]: 3.3k nodes, 43.88 MB dense, 94.30 kB sparse; PubMed [33]: 19.7k nodes, 1.56 GB dense, 1.77 MB sparse; arXiv [21]: 169.3k nodes, 114.71 GB dense, 23.32 MB sparse; Products [21]: 2.4M nodes, 23.99 TB dense, 2.47 GB sparse; Papers 100M [21]: 111.1M nodes, 49.34 PB dense, 32.31 GB sparse.
Dataset Splits | Yes | For the OGB datasets we use the public splits and otherwise sample 20 nodes per class for training/validation (see the sampling sketch after the table). We typically report the average over three random seeds/splits and the 3-sigma error of the mean.
Hardware Specification | Yes | We only use a 32 GB Tesla V100 for the experiments on Products with a full-batch GNN, since a three-layer GCN requires roughly 30 GB already during training.
Software Dependencies | No | The paper does not explicitly list specific software dependencies with version numbers.
Experiment Setup | Yes | On arXiv (170k nodes), we train for 500 epochs and run the global PR-BCD attack for 500 epochs. The whole training and attacking procedure requires less than 2 minutes.
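The Pseudocode row above refers to Algorithm 1 of the paper. Below is a minimal PyTorch sketch of the PR-BCD idea, assuming a user-supplied `attack_loss(idx, p)` that scores a perturbation given a sampled block of candidate-edge indices and their continuous weights; the keep-half resampling heuristic and the final top-k discretization are illustrative simplifications, not the authors' implementation (the paper, for instance, samples the final perturbation from the weights).

```python
import torch

def project(p: torch.Tensor, budget: float) -> torch.Tensor:
    """Project weights onto {p in [0,1]^b : sum(p) <= budget} by bisecting
    the Lagrangian shift mu (a minimal stand-in for the paper's projection)."""
    p = p.clamp(0, 1)
    if p.sum() <= budget:
        return p
    lo, hi = (p.min() - 1).item(), p.max().item()
    for _ in range(50):
        mu = (lo + hi) / 2
        if (p - mu).clamp(0, 1).sum() <= budget:
            hi = mu
        else:
            lo = mu
    return (p - hi).clamp(0, 1)

def pr_bcd(attack_loss, num_candidates: int, block_size: int,
           budget: int, epochs: int = 500, lr: float = 0.1) -> torch.Tensor:
    """Hypothetical sketch: `attack_loss(idx, p)` rebuilds the perturbed
    adjacency from the block each call and returns a scalar surrogate loss."""
    idx = torch.randint(num_candidates, (block_size,))  # random coordinate block
    p = torch.full((block_size,), 1e-3, requires_grad=True)
    for _ in range(epochs):
        grad = torch.autograd.grad(attack_loss(idx, p), p)[0]
        with torch.no_grad():
            p += lr * grad                              # ascent: maximize the loss
            p.copy_(project(p, budget))
            # resample the block: keep the strongest half, redraw the rest
            keep = p.topk(block_size // 2).indices
            fresh = torch.randint(num_candidates, (block_size - keep.numel(),))
            idx = torch.cat([idx[keep], fresh])
            p.copy_(torch.cat([p[keep],
                               torch.full_like(fresh, 1e-3, dtype=p.dtype)]))
    return idx[p.topk(budget).indices]                  # discretize: top-budget flips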
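As a plausibility check of the memory figures in the Open Datasets row, the dense column is consistent with a float32 n x n adjacency matrix (4 bytes per entry) and the sparse column with roughly 20 bytes per edge (two int64 indices plus one float32 value); the byte layout is an inference, and the edge count below is the published ogbn-arxiv figure.

```python
# Plausibility check of the adjacency-memory figures quoted above.
# Assumptions: dense = float32 n x n matrix (4 bytes/entry);
# sparse = COO with two int64 indices + one float32 value (20 bytes/edge).
def dense_bytes(num_nodes: int) -> int:
    return 4 * num_nodes ** 2

def sparse_bytes(num_edges: int) -> int:
    return 20 * num_edges

print(f"{dense_bytes(169_343) / 1e9:.2f} GB")     # arXiv dense: ~114.71 GB
print(f"{sparse_bytes(1_166_243) / 1e6:.2f} MB")  # arXiv sparse: ~23.32 MB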
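For the non-OGB splits quoted in the Dataset Splits row, a hypothetical helper like the following reproduces the "20 nodes per class for training/validation" protocol; uniform sampling without replacement and the disjoint validation block are assumptions, not details stated in the paper.

```python
import torch

def per_class_splits(labels: torch.Tensor, per_class: int = 20, seed: int = 0):
    """Sample `per_class` train and `per_class` validation nodes per class,
    disjointly; remaining nodes could serve as the test set (assumption)."""
    gen = torch.Generator().manual_seed(seed)
    train, val = [], []
    for c in labels.unique():
        nodes = (labels == c).nonzero(as_tuple=True)[0]
        perm = nodes[torch.randperm(nodes.numel(), generator=gen)]
        train.append(perm[:per_class])
        val.append(perm[per_class:2 * per_class])
    return torch.cat(train), torch.cat(val)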