RoboGNN: Robustifying Node Classification under Link Perturbation

Authors: Sheng Guan, Hanchao Ma, Yinghui Wu

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Using real-world benchmark graphs, we experimentally verify that RoboGNN can effectively robustify representative GNNs with guaranteed robustness and desirable gains in accuracy. We further verify the effectiveness of RoboGNN in improving the robustness and accuracy of GNN-based classification, its learning cost, and the impact of its parameters.
Researcher Affiliation | Academia | Sheng Guan, Hanchao Ma, Yinghui Wu; Case Western Reserve University; {sxg967,hxm382,yxw1650}@case.edu
Pseudocode | Yes | Algorithm 1 (minProtect); Algorithm 2 (RoboGNN)
Open Source Code | Yes | The source code and datasets are available at https://github.com/CWRU-DB-Group/robognn
Open Datasets | Yes | We use three real-world datasets: Cora [McCallum et al., 2000], Citeseer [Giles et al., 1998], and Pubmed [Sen et al., 2008].
Dataset Splits | Yes | Table 1 (datasets, training, and robustification settings): training nodes 140 (Cora), 120 (Citeseer), 60 (Pubmed); validation nodes 500 each; test nodes 1,000 each.
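The split counts quoted from Table 1 can be captured in a small configuration sketch. The dictionary layout and helper function below are illustrative, not code from the authors; only the numeric counts come from the paper as quoted above.

```python
# Train/validation/test node counts per dataset, as reported in Table 1
# of the RoboGNN paper (the dict layout itself is illustrative).
SPLITS = {
    "Cora":     {"train": 140, "val": 500, "test": 1000},
    "Citeseer": {"train": 120, "val": 500, "test": 1000},
    "Pubmed":   {"train": 60,  "val": 500, "test": 1000},
}

def total_labeled(name):
    """Total number of nodes assigned a fixed role in the split."""
    s = SPLITS[name]
    return s["train"] + s["val"] + s["test"]

print(total_labeled("Cora"))  # 1640
```

Note that these are the standard Planetoid-style splits for Cora, Citeseer, and Pubmed, with identical validation and test sizes across datasets.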
Hardware Specification | Yes | All experiments are executed in a Unix environment with an NVIDIA P100 GPU.
Software Dependencies | No | The paper mentions software such as GCN, GAT, and π-PPNP but does not specify exact version numbers for these or any other ancillary software components (e.g., Python, PyTorch, or TensorFlow versions).
Experiment Setup | Yes | We train a two-layer network for all input models with the same set of hyperparameter settings (e.g., dropout rate, number of hidden units). The number of training epochs is set to 300. For each dataset, we fix the learning rate for Pro-GNN, cert PPNP, and RoboGNN.
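For readers unfamiliar with the "two-layer network" setup quoted above, a minimal numpy sketch of a two-layer GCN forward pass is shown below. This is an assumption-laden illustration, not the authors' code: it uses the standard GCN propagation rule (symmetrically normalized adjacency), omits dropout (inference mode), and the toy graph sizes, hidden-unit count, and weight initialization are all made up for the example.

```python
import numpy as np

def normalize_adj(A):
    """Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}."""
    A_hat = A + np.eye(A.shape[0])
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return D_inv_sqrt @ A_hat @ D_inv_sqrt

def two_layer_gcn(A, X, W1, W2):
    """Forward pass of a two-layer GCN: softmax(Â ReLU(Â X W1) W2)."""
    A_norm = normalize_adj(A)
    H = np.maximum(A_norm @ X @ W1, 0.0)   # hidden layer + ReLU
    Z = A_norm @ H @ W2                    # per-node class logits
    expZ = np.exp(Z - Z.max(axis=1, keepdims=True))
    return expZ / expZ.sum(axis=1, keepdims=True)

# Toy graph: 4 nodes, 3 input features, 16 hidden units, 2 classes.
rng = np.random.default_rng(0)
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
X = rng.normal(size=(4, 3))
W1 = rng.normal(size=(3, 16)) * 0.1
W2 = rng.normal(size=(16, 2)) * 0.1
probs = two_layer_gcn(A, X, W1, W2)
print(probs.shape)  # (4, 2): one probability row per node
```

Link perturbations of the kind RoboGNN defends against act on the adjacency matrix `A` here, which is why node predictions change when edges are added or removed.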