Boosting the Adversarial Robustness of Graph Neural Networks: An OOD Perspective

Authors: Kuan Li, YiWen Chen, Yang Liu, Jin Wang, Qing He, Minhao Cheng, Xiang Ao

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experiments over 25,000 perturbed graphs, our method could still maintain good robustness against both adaptive and non-adaptive attacks.
Researcher Affiliation | Academia | 1 The Hong Kong University of Science and Technology, 2 Beihang University, 3 Institute of Computing Technology, Chinese Academy of Sciences, 4 UCLA, 5 Penn State University
Pseudocode | Yes | Algorithm 1: GOOD-AT
Open Source Code | Yes | The code is provided at https://github.com/likuanppd/GOOD-AT
Open Datasets | Yes | on two widely used datasets, namely Cora (Bojchevski & Günnemann, 2017) and Citeseer (Giles et al., 1998).
Dataset Splits | Yes | The data split follows 10%/10%/80% (train/validation/testing). [See the split sketch below.]
Hardware Specification | Yes | In this study, all experiments were conducted on a computing cluster equipped with NVIDIA Tesla A100 GPUs. Each GPU has 80 GB of memory.
Software Dependencies | Yes | The operating system used for the experiments is Ubuntu 20.04 LTS. The deep learning models were implemented using the PyTorch framework (version 2.0.0) with Python (version 3.8.8) as the programming language. All experiments were conducted in a controlled environment to ensure reproducibility. [See the environment check below.]
Experiment Setup | Yes | For GOOD-AT, K is the number of detectors and is tuned from {5, 10, 15, 20}. The budgets of PGD used to generate OOD samples are tuned from 0.1 to 1.0. We consider grid search for the hidden-layer dimension of the detectors within {32, 64, 128, 256, 512} and the learning rate within {0.1, 0.01, 0.001}. The threshold of the step function Γ is tuned from {0.2, 0.5, 0.6, 0.7, 0.8, 0.9}. For the GCN classifier, we follow Mujkanovic et al. (2022) to set the dropout to 0.9, hidden size to 64, and weight decay to 0.001. For self-training, the only hyper-parameter is the number of pseudo-labels in each class, and we tune it from {20, 100}. [See the hyperparameter-grid sketch below.]
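
The dataset and split rows above report Cora/Citeseer with a 10%/10%/80% (train/validation/test) node split. The sketch below shows one way to reproduce such a split; the PyTorch Geometric Planetoid loader and the uniformly random per-node split are illustrative assumptions, not the authors' exact pipeline.

```python
import torch
from torch_geometric.datasets import Planetoid

# Load Cora (Citeseer works the same way with name="CiteSeer").
dataset = Planetoid(root="data/Planetoid", name="Cora")
data = dataset[0]

# 10% train / 10% validation / 80% test, drawn uniformly at random over nodes.
num_nodes = data.num_nodes
perm = torch.randperm(num_nodes)
n_train = int(0.1 * num_nodes)
n_val = int(0.1 * num_nodes)

train_mask = torch.zeros(num_nodes, dtype=torch.bool)
val_mask = torch.zeros(num_nodes, dtype=torch.bool)
test_mask = torch.zeros(num_nodes, dtype=torch.bool)
train_mask[perm[:n_train]] = True
val_mask[perm[n_train:n_train + n_val]] = True
test_mask[perm[n_train + n_val:]] = True

data.train_mask, data.val_mask, data.test_mask = train_mask, val_mask, test_mask
```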
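The software-dependency row pins Python 3.8.8 and PyTorch 2.0.0 on Ubuntu 20.04 LTS. A minimal sanity check of the local environment, assuming only the standard library and torch, could look like this:

```python
import platform
import torch

# Reported environment: Python 3.8.8 and PyTorch 2.0.0 on Ubuntu 20.04 LTS.
# Lightweight check before running experiments (illustrative only).
assert platform.python_version().startswith("3.8"), platform.python_version()
assert torch.__version__.startswith("2.0"), torch.__version__
print(f"Python {platform.python_version()}, PyTorch {torch.__version__}, "
      f"CUDA available: {torch.cuda.is_available()}")
```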
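The experiment-setup row can be read as a hyperparameter grid. The dictionary below restates the quoted search spaces; the 0.1-step spacing of the PGD budgets and the run_good_at helper are assumptions for illustration, not the authors' tuning code.

```python
from itertools import product

# Hyperparameter grid as quoted in the Experiment Setup row.
grid = {
    "num_detectors_K": [5, 10, 15, 20],
    "pgd_budget": [round(0.1 * i, 1) for i in range(1, 11)],  # assumed 0.1-step spacing from 0.1 to 1.0
    "detector_hidden_dim": [32, 64, 128, 256, 512],
    "detector_lr": [0.1, 0.01, 0.001],
    "threshold_gamma": [0.2, 0.5, 0.6, 0.7, 0.8, 0.9],
}
# Fixed GCN classifier settings following Mujkanovic et al. (2022).
gcn_config = {"dropout": 0.9, "hidden_size": 64, "weight_decay": 0.001}

for values in product(*grid.values()):
    config = dict(zip(grid.keys(), values))
    # run_good_at(config, gcn_config)  # hypothetical training/evaluation call
```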