Towards More Practical Adversarial Attacks on Graph Neural Networks
Authors: Jiaqi Ma, Shuangrui Ding, Qiaozhu Mei
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the proposed procedure can significantly increase the misclassification rate of common GNNs on real-world data without access to model parameters or predictions. |
| Researcher Affiliation | Academia | School of Information, University of Michigan, Ann Arbor, Michigan, USA. Department of EECS, University of Michigan, Ann Arbor, Michigan, USA. |
| Pseudocode | Yes | Algorithm 1: The GC-RWCS Strategy for Node Selection. (A hedged reconstruction of this procedure is sketched after the table.) |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code or a link to a code repository for the methodology described. |
| Open Datasets | Yes | We adopt three citation networks, Citeseer, Cora, and Pubmed, which are standard node classification benchmark datasets [26]. |
| Dataset Splits | Yes | Following the setup of JK-Net [25], we randomly split each dataset by 60%, 20%, and 20% for training, validation, and testing. (A minimal split sketch also follows the table.) |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments, such as GPU models, CPU types, or memory specifications. |
| Software Dependencies | No | The paper mentions using GCN and JK-Net models and following hyper-parameter setups from a prior work [25], but it does not specify any software dependencies with version numbers, such as programming languages, libraries, or frameworks (e.g., Python version, PyTorch/TensorFlow version). |
| Experiment Setup | Yes | We set the number of layers for GCN as 2 and the number of layers for both JK-Concat and JK-Maxpool as 7. The hidden size of each layer is 32. For the proposed GC-RWCS strategy, we fix the number of steps L = 4, the neighbor-hop parameter k = 1, and the parameter l = 30 for the binarized M̃ in Eq. (4) for all models on all datasets. (These values are used as defaults in the sketch below.) |
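
The quoted setup gives enough detail for a rough, hypothetical reconstruction of the node-selection step. The Python sketch below is not the authors' Algorithm 1: the function name `gc_rwcs_select`, the use of NetworkX/NumPy, the row-wise top-l binarization, and the coverage-based greedy correction are all illustrative assumptions consistent with the description above (L = 4 random-walk steps, k = 1 hop exclusion, l = 30 binarization).

```python
import numpy as np
import networkx as nx

def gc_rwcs_select(G, n_select, L=4, k=1, l=30):
    """Illustrative GC-RWCS-style node selection (cf. Algorithm 1).

    Assumption: score each node by the row sum of a binarized L-step
    random-walk matrix, then greedily pick high-score nodes while
    excluding the k-hop neighborhood of chosen nodes and discounting
    influence that chosen nodes already cover (the greedy correction).
    """
    nodes = list(G.nodes())
    idx = {u: i for i, u in enumerate(nodes)}
    A = nx.to_numpy_array(G, nodelist=nodes)
    deg = np.maximum(A.sum(axis=1), 1.0)       # guard isolated nodes
    M = A / deg[:, None]                       # random-walk matrix D^{-1} A
    M_L = np.linalg.matrix_power(M, L)         # L-step transition probabilities

    # Binarize: keep the l largest entries in each row (assumption).
    M_bin = np.zeros_like(M_L)
    rows = np.arange(len(nodes))[:, None]
    M_bin[rows, np.argsort(M_L, axis=1)[:, -l:]] = 1.0

    selected = []
    candidates = set(range(len(nodes)))
    covered = np.zeros(len(nodes), dtype=bool)   # columns already influenced
    while len(selected) < n_select and candidates:
        scores = M_bin[:, ~covered].sum(axis=1)  # corrected importance scores
        best = max(candidates, key=scores.__getitem__)
        selected.append(nodes[best])
        covered |= M_bin[best].astype(bool)      # discount covered influence
        # Exclude the chosen node and its k-hop neighborhood from candidates.
        ego = nx.ego_graph(G, nodes[best], radius=k)
        candidates -= {idx[u] for u in ego.nodes()}
    return selected
```

With the reported defaults, `gc_rwcs_select(G, n_select, L=4, k=1, l=30)` would return the attack node set; the attack itself (perturbing the features of those nodes) is outside this sketch.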
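
The 60/20/20 split is likewise easy to mirror. A minimal sketch, assuming PyTorch index tensors and a hypothetical `random_split` helper (the paper does not prescribe an implementation):

```python
import torch

def random_split(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Randomly split node indices 60/20/20 into train/val/test sets."""
    gen = torch.Generator().manual_seed(seed)   # seed choice is illustrative
    perm = torch.randperm(num_nodes, generator=gen)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]
```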