Revisiting Graph Adversarial Attack and Defense From a Data Distribution Perspective
Authors: Kuan Li, Yang Liu, Xiang Ao, Qing He
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate our claims through extensive experiments on four benchmark datasets. We conduct our experiments on four benchmark datasets including Cora, Citeseer (Sen et al., 2008), Polblogs (Adamic & Glance, 2005), and one large-scale citation graph, ogbn-arxiv (Hu et al., 2020). |
| Researcher Affiliation | Academia | Kuan Li, Yang Liu, Xiang Ao, Qing He; Key Lab of Intelligent Information Processing of Chinese Academy of Sciences, Institute of Computing Technology, Chinese Academy of Sciences; University of Chinese Academy of Sciences; likuan_buaa@163.com, liuyang520ict@gmail.com, {aoxiang, heqing}@ict.ac.cn |
| Pseudocode | Yes | The algorithm of our heuristic attack is shown in Algorithm 1. This Self-Training Robust GCN (STRG) is described in Algorithm 2. |
| Open Source Code | Yes | The codes are available at https://github.com/likuanppd/STRG. |
| Open Datasets | Yes | We conduct our experiments on four benchmark datasets including Cora, Citeseer (Sen et al., 2008), Polblogs (Adamic & Glance, 2005), and one large-scale citation graph, ogbn-arxiv (Hu et al., 2020). |
| Dataset Splits | Yes | For Cora, Citeseer, and Polblogs, the data split follows 10%/10%/80% if not specified. For ogbn-arxiv, we use the public split. |
| Hardware Specification | No | The paper discusses the experimental setup and runtime, but does not specify any particular hardware components such as GPU or CPU models, memory, or cloud instance types used for the experiments. |
| Software Dependencies | No | We use DeepRobust, an adversarial attack repository (Li et al., 2020), to implement Metattack (Zügner & Günnemann, 2019), PGD (Xu et al., 2019a), DICE (Waniek et al., 2018), Jaccard (Wu et al., 2019), SimP-GCN (Jin et al., 2021), and Pro-GNN (Jin et al., 2020). We perform FGSM according to Dai et al. (2018). STABLE (Li et al., 2022), GNNGuard, and Elastic (Liu et al., 2021a) are implemented with the code provided by the authors. |
| Experiment Setup | Yes | All the hyper-parameters are tuned based on the loss and accuracy of the validation set. For Jaccard, the Jaccard similarity threshold is tuned from {0.01, 0.02, 0.03, 0.04, 0.05}. For GNNGuard, Pro-GNN, SimP-GCN, and Elastic, we use the default hyper-parameter settings in the authors' implementations. |
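
The 10%/10%/80% split quoted in the Dataset Splits row can be reproduced along the following lines. This is a minimal sketch, not the authors' script; the random seed and the splitting logic are illustrative assumptions.

```python
# Hedged sketch: load Cora with DeepRobust and draw a random 10%/10%/80%
# node split. The seed and split procedure are assumptions for illustration.
import numpy as np
from deeprobust.graph.data import Dataset

data = Dataset(root='/tmp/', name='cora')   # downloads Cora on first use
adj, features, labels = data.adj, data.features, data.labels

n = labels.shape[0]
perm = np.random.default_rng(0).permutation(n)
idx_train = perm[:int(0.1 * n)]               # 10% training nodes
idx_val = perm[int(0.1 * n):int(0.2 * n)]     # 10% validation nodes
idx_test = perm[int(0.2 * n):]                # 80% test nodes
```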
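For the Software Dependencies row, a typical DeepRobust pipeline for Metattack looks roughly as follows. The surrogate architecture (a 16-unit linearized GCN) and the 5% perturbation budget are assumptions for illustration, not values taken from the paper.

```python
# Hedged sketch of generating a Metattack perturbation with DeepRobust.
import numpy as np
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCN
from deeprobust.graph.global_attack import Metattack

data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test
idx_unlabeled = np.union1d(idx_val, idx_test)

# Surrogate GCN trained on the clean graph (standard Metattack recipe).
surrogate = GCN(nfeat=features.shape[1], nhid=16, nclass=labels.max() + 1,
                with_relu=False, device='cpu').to('cpu')
surrogate.fit(features, adj, labels, idx_train, idx_val)

attacker = Metattack(model=surrogate, nnodes=adj.shape[0],
                     feature_shape=features.shape, device='cpu').to('cpu')
n_perturbations = int(0.05 * (adj.sum() // 2))   # 5% of edges (assumed budget)
attacker.attack(features, adj, labels, idx_train, idx_unlabeled,
                n_perturbations, ll_constraint=False)
modified_adj = attacker.modified_adj              # poisoned adjacency matrix
```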
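The Jaccard threshold grid quoted in the Experiment Setup row can be searched with DeepRobust's GCNJaccard as sketched below. Selecting by validation accuracy, the hidden size, and the variable names are assumptions, not the authors' tuning script.

```python
# Hedged sketch: grid search over the Jaccard similarity threshold.
from deeprobust.graph.data import Dataset
from deeprobust.graph.defense import GCNJaccard

data = Dataset(root='/tmp/', name='cora')
adj, features, labels = data.adj, data.features, data.labels
idx_train, idx_val = data.idx_train, data.idx_val

best_acc, best_threshold = 0.0, None
for threshold in [0.01, 0.02, 0.03, 0.04, 0.05]:
    model = GCNJaccard(nfeat=features.shape[1], nhid=16,
                       nclass=labels.max() + 1, device='cpu').to('cpu')
    # GCNJaccard prunes edges between nodes whose feature Jaccard similarity
    # is below `threshold`, then trains a standard GCN on the cleaned graph.
    model.fit(features, adj, labels, idx_train, idx_val, threshold=threshold)
    acc = model.test(idx_val)
    if acc > best_acc:
        best_acc, best_threshold = acc, threshold
```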