Randomized Message-Interception Smoothing: Gray-box Certificates for Graph Neural Networks

Authors: Yan Scholten, Jan Schuchardt, Simon Geisler, Aleksandar Bojchevski, Stephan Günnemann

NeurIPS 2022

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our certificates on node classification datasets and analyze the robustness of existing GNN architectures. We demonstrate the effectiveness of our method on various models and datasets. |
| Researcher Affiliation | Academia | Dept. of Computer Science & Munich Data Science Institute, Technical University of Munich; CISPA Helmholtz Center for Information Security |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Project page: https://www.cs.cit.tum.de/daml/interception-smoothing |
| Open Datasets | Yes | We train our models on citation datasets: Cora-ML (Bojchevski and Günnemann, 2018; McCallum et al., 2000) with 2,810 nodes, 7,981 edges and 7 classes; Citeseer (Sen et al., 2008) with 2,110 nodes, 3,668 edges and 6 classes; and PubMed (Namata et al., 2012) with 19,717 nodes, 44,324 edges and 3 classes. |
| Dataset Splits | Yes | As labelled nodes, we draw 20 nodes per class for training and validation, and 10% of the nodes for testing. |
| Hardware Specification | No | No specific hardware details (exact GPU/CPU models, processor speeds, or memory amounts) used for running the experiments were found. |
| Software Dependencies | No | The paper mentions software such as PyTorch Geometric and various GNN architectures (GCN, GAT, SMA) but does not provide version numbers for any of these components. |
| Experiment Setup | Yes | As labelled nodes, we draw 20 nodes per class for training and validation, and 10% of the nodes for testing. We use n0 = 1,000 samples for estimating the majority class, n1 = 3,000 samples for certification, and set α = 0.01. |
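The experiment-setup row describes the standard two-stage Monte Carlo procedure used in randomized-smoothing certification: n0 samples of the randomized classifier estimate the majority class, then n1 fresh samples give a high-confidence lower bound on its probability at level α. The sketch below illustrates that generic procedure only; the `classify` and `sample` callables are hypothetical placeholders (the paper's actual randomization intercepts messages in a GNN), and a Wilson score interval is used here as a stand-in for the exact confidence bound a real certificate would require.

```python
import random
from statistics import NormalDist

def smoothed_prediction_and_bound(classify, x, sample, n0=1000, n1=3000, alpha=0.01):
    """Two-stage Monte Carlo estimate for a smoothed classifier.

    classify: base classifier mapping an input to a class label (placeholder).
    sample:   draws one randomized version of the input x (placeholder for
              the paper's message-interception randomization).
    Returns the estimated majority class and a one-sided (1 - alpha)
    lower confidence bound on its probability (Wilson score, an
    approximation -- not the exact bound a certificate would use).
    """
    # Stage 1: estimate the majority class from n0 randomized samples.
    counts = {}
    for _ in range(n0):
        y = classify(sample(x))
        counts[y] = counts.get(y, 0) + 1
    y_hat = max(counts, key=counts.get)

    # Stage 2: lower-bound p(y_hat) with n1 fresh samples.
    k = sum(classify(sample(x)) == y_hat for _ in range(n1))
    z = NormalDist().inv_cdf(1 - alpha)  # one-sided critical value
    phat = k / n1
    denom = 1 + z * z / n1
    center = phat + z * z / (2 * n1)
    margin = z * (phat * (1 - phat) / n1 + z * z / (4 * n1 * n1)) ** 0.5
    p_lower = (center - margin) / denom
    return y_hat, p_lower
```

If `p_lower` exceeds the threshold implied by the perturbation model, the prediction is certified; otherwise the procedure abstains. Drawing the two stages from independent samples is what keeps the bound statistically valid.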