Adversarial Examples for Graph Data: Deep Insights into Attack and Defense
Authors: Huijun Wu, Chen Wang, Yuriy Tyshetskiy, Andrew Docherty, Kai Lu, Liming Zhu
IJCAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on a number of datasets show the effectiveness of the proposed techniques. |
| Researcher Affiliation | Academia | 1University of New South Wales, Australia 2Data61, CSIRO 3National University of Defense Technology, China |
| Pseudocode | Yes | Algorithm 1 shows the pseudo-code for the untargeted IG-JSMA attack. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code or links to a code repository. |
| Open Datasets | Yes | We use the widely used CORA-ML [McCallum et al., 2000], CITESEER [Bojchevski and Günnemann, 2018] and Polblogs [Adamic and Glance, 2005] datasets. |
| Dataset Splits | Yes | We split each graph into a labeled (20%) set and an unlabeled set of nodes (80%). Among the labeled nodes, half of them are used for training while the rest are used for validation. |
| Hardware Specification | No | The paper does not specify any particular hardware (e.g., GPU, CPU model) used for running the experiments. It only mentions 'our non-optimized Python implementation'. |
| Software Dependencies | No | The paper mentions 'Python implementation' but does not provide specific software dependencies with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x). |
| Experiment Setup | No | The paper mentions training a 'two-layer GCN' but does not provide specific hyperparameters such as learning rate, batch size, number of epochs, or optimizer details required for reproduction. |
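The split protocol reported in the table (20% labeled / 80% unlabeled, with the labeled nodes divided evenly between training and validation) can be sketched as follows. The node count and random seed are illustrative assumptions, not values taken from the paper.

```python
import random

def split_nodes(num_nodes, labeled_frac=0.2, seed=0):
    """Split node indices into train / validation / unlabeled sets.

    Follows the protocol reported above: `labeled_frac` of the nodes are
    labeled, and the labeled set is divided evenly into training and
    validation halves. Seed and node count are illustrative.
    """
    rng = random.Random(seed)
    indices = list(range(num_nodes))
    rng.shuffle(indices)
    n_labeled = int(labeled_frac * num_nodes)
    labeled, unlabeled = indices[:n_labeled], indices[n_labeled:]
    train, val = labeled[: n_labeled // 2], labeled[n_labeled // 2:]
    return train, val, unlabeled

# Illustrative graph size; the actual datasets (CORA-ML, CITESEER,
# Polblogs) each have their own node counts.
train, val, unlabeled = split_nodes(1000)
```

With 1,000 nodes this yields 100 training, 100 validation, and 800 unlabeled nodes, matching the 10% / 10% / 80% effective proportions implied by the paper's description.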