Understanding and Improving Graph Injection Attack by Promoting Unnoticeability
Authors: Yongqiang Chen, Han Yang, Yonggang Zhang, MA KAILI, Tongliang Liu, Bo Han, James Cheng
ICLR 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments verify that GIA with HAO can break homophily-based defenses and outperform previous GIA attacks by a significant margin. We believe our methods can serve for a more reliable evaluation of the robustness of GNNs. Extensive experiments with 38 defense models on 6 benchmarks demonstrate that GIA with HAO can break homophily defenders and significantly outperform all previous works across all settings, including both non-target attack and targeted attack. |
| Researcher Affiliation | Academia | 1 The Chinese University of Hong Kong; 2 Hong Kong Baptist University; 3 The University of Sydney. Emails: {yqchen,hyang,klma,jcheng}@cse.cuhk.edu.hk; csygzhang@comp.hkbu.edu.hk; tongliang.liu@sydney.edu.au; bhanml@comp.hkbu.edu.hk |
| Pseudocode | Yes | Algorithm 1: AGIA: Adaptive Graph Injection Attack with Gradient. Algorithm 2: SeqGIA: Sequential Adaptive Graph Injection Attack. |
| Open Source Code | Yes | Code is available in https://github.com/LFhase/GIA-HAO. |
| Open Datasets | Yes | Datasets. We comprehensively evaluate our methods with 38 defense models on 6 datasets. We select two classic citation networks Cora and Citeseer (Yang et al., 2016; Giles et al., 1998) refined by GRB (Zheng et al., 2021). We also use Aminer and Reddit (Tang et al., 2008; Hamilton et al., 2017b; Zeng et al., 2020) from GRB, Arxiv from OGB (Hu et al., 2020), and a co-purchasing network Computers (McAuley et al., 2015) to cover more domains and scales. Details are in Appendix H.1. |
| Dataset Splits | Yes | By default, we set total training epochs as 400 and employ the early stop of 100 epochs according to the validation accuracy. For final model selection, we select the final model with best validation accuracy. |
| Hardware Specification | Yes | We ran our experiments on Linux Servers with 40 cores Intel(R) Xeon(R) Silver 4114 CPU @ 2.20GHz, 256 GB Memory, and Ubuntu 18.04 LTS installed. One has 4 NVIDIA RTX 2080Ti graphics cards with CUDA 10.2 and the other has 2 NVIDIA RTX 2080Ti and 2 NVIDIA RTX 3090Ti graphics cards with CUDA 11.3. |
| Software Dependencies | Yes | We implement our methods with PyTorch (Paszke et al., 2019) and PyTorch Geometric (Fey & Lenssen, 2019). ...CUDA 10.2 and ...CUDA 11.3. |
| Experiment Setup | Yes | By default, all GNNs used in our experiments have 3 layers, a hidden dimension of 64 for Cora, Citeseer, and Computers, a hidden dimension of 128 for the rest medium to large scale graphs. We also adopt dropout (Srivastava et al., 2014) with dropout rate of 0.5 between each layer. The optimizer we used is Adam (Kingma & Ba, 2015) with a learning rate of 0.01. By default, we set total training epochs as 400 and employ the early stop of 100 epochs according to the validation accuracy. |
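The training schedule quoted above (at most 400 epochs, early stopping with a patience of 100 epochs, and final selection of the checkpoint with the best validation accuracy) can be sketched in framework-agnostic Python. This is a minimal illustration, not the authors' code: `train_one_epoch` and `evaluate` are hypothetical stand-ins for the real PyTorch training and validation steps.

```python
def run_training(train_one_epoch, evaluate, max_epochs=400, patience=100):
    """Train for up to `max_epochs`, stopping early if validation
    accuracy has not improved for `patience` consecutive epochs.
    Returns the state and accuracy of the best-validation checkpoint."""
    best_val, best_state, epochs_since_best = -1.0, None, 0
    for epoch in range(max_epochs):
        state = train_one_epoch(epoch)   # stand-in: returns a model checkpoint
        val_acc = evaluate(state)        # stand-in: validation accuracy
        if val_acc > best_val:
            best_val, best_state, epochs_since_best = val_acc, state, 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:  # early stop on plateau
                break
    return best_state, best_val
```

The final-model selection matters here: with early stopping on a plateau, the last epoch's model is generally worse than the best-validation checkpoint, so the function returns the latter.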