Adversarial Attack on Graph Structured Data
Authors: Hanjun Dai, Hui Li, Tian Tian, Xin Huang, Lin Wang, Jun Zhu, Le Song
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We use both synthetic and real-world data to show that a family of Graph Neural Network models are vulnerable to these attacks, in both graph-level and node-level classification tasks. We also show such attacks can be used to diagnose the learned classifiers. and Section 4 (Experiment) |
| Researcher Affiliation | Collaboration | ¹Georgia Institute of Technology, ²Ant Financial, ³Tsinghua University. |
| Pseudocode | No | The paper describes algorithms using prose and mathematical equations but does not present any formal pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code. |
| Open Datasets | Yes | Here we use four real-world datasets, namely the Citeseer, Cora, Pubmed and Finance. and Table 3. Statistics of the graphs used for node classification. and We use GCN (Kipf & Welling, 2016) as the target model to attack. |
| Dataset Splits | Yes | The dataset is divided into training and two test sets. The test set I contains 1,500 graphs, while test set II contains 150 graphs. and Table 3. Statistics of the graphs used for node classification. with columns Train/Test I/Test II for real-world datasets, e.g., Citeseer 120/1,000/500. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper mentions models like GNNs, structure2vec, and GCN but does not specify any software dependencies with version numbers (e.g., Python, PyTorch, or TensorFlow versions). |
| Experiment Setup | Yes | For Genetic Alg, we set the population size \|P\| = 100 and the number of rounds R = 10. We tune the crossover rate and mutation rate in {0.1, ..., 0.5}. For RL-S2V, we tune the number of propagations of its S2V model K ∈ {1, ..., 5}. and We choose structure2vec as the target model for attack. We also tune its number of propagation parameter K ∈ {2, ..., 5}. and We use GCN (Kipf & Welling, 2016) as the target model to attack. Here the small-modification constraint is used to regulate the attacker. That is, given a graph G and target node c, the adversarial samples are limited to deleting a single edge within 2 hops of node c. |
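The 2-hop single-edge-deletion constraint quoted above defines the attacker's action space: only edges whose both endpoints lie within 2 hops of the target node c may be removed. A minimal sketch of how that candidate set could be enumerated, assuming the graph is given as an undirected edge list (the function name and representation are illustrative, not from the paper):

```python
from collections import deque

def candidate_edge_deletions(edges, c, hops=2):
    """Return edges whose removal stays within `hops` of target node c.

    `edges` is an undirected edge list of (u, v) pairs; an edge is a
    candidate only if both endpoints are within `hops` of c.
    """
    # Build an adjacency list from the edge list.
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)

    # BFS from c, recording shortest-path distances up to `hops`.
    dist = {c: 0}
    q = deque([c])
    while q:
        u = q.popleft()
        if dist[u] == hops:
            continue  # do not expand beyond the hop limit
        for w in adj.get(u, ()):
            if w not in dist:
                dist[w] = dist[u] + 1
                q.append(w)

    # Keep edges fully contained in the 2-hop neighborhood of c.
    return [(u, v) for u, v in edges if u in dist and v in dist]

# Example: path graph 0-1-2-3-4 with target node 0.
edges = [(0, 1), (1, 2), (2, 3), (3, 4)]
print(candidate_edge_deletions(edges, 0))  # [(0, 1), (1, 2)]
```

Restricting the search to this neighborhood keeps the perturbation "small" in the sense the paper uses and shrinks the action space the genetic algorithm or RL-S2V attacker must explore.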