Adversarial Attacks on Graph Neural Networks via Meta Learning
Authors: Daniel Zügner, Stephan Günnemann
ICLR 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that small graph perturbations consistently lead to a strong decrease in performance for graph convolutional networks, and even transfer to unsupervised embeddings. |
| Researcher Affiliation | Academia | Daniel Zügner and Stephan Günnemann, Technical University of Munich, Germany {zuegnerd,guennemann}@in.tum.de |
| Pseudocode | Yes | A summary of our algorithm can be found in Appendix A. |
| Open Source Code | Yes | Our code is available at https://www.kdd.in.tum.de/gnn-meta-attack. |
| Open Datasets | Yes | We evaluate our approach on the well-known CITESEER (Sen et al., 2008), CORA-ML (McCallum et al., 2000), and POLBLOGS (Adamic & Glance, 2005) datasets; an overview is given in Table 6. |
| Dataset Splits | No | We split the datasets into labeled (10%) and unlabeled (90%) nodes. The labels of the unlabeled nodes are never visible to the attacker or the classifiers and are only used to evaluate the generalization performance of the models. The paper only mentions labeled/unlabeled splits, which effectively act as train/test, but no separate validation split is explicitly mentioned or quantified. |
| Hardware Specification | No | The paper mentions that its complexity implies it can be executed 'using a commodity GPU' but does not specify any particular GPU model, CPU model, or other hardware details for the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, such as Python or deep learning framework versions. |
| Experiment Setup | No | The paper states that gradient descent is run for 100 iterations during meta-gradient computation and that the target classifiers (GCN, CLN) are trained in a semi-supervised way, but it does not give hyperparameters such as learning rates, batch sizes, or total epochs for these models, nor a dedicated experimental-setup section. |
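To make the meta-gradient idea referenced above concrete, the sketch below is a minimal NumPy toy, not the authors' implementation: it trains a one-layer GCN-style logistic regression on a small graph and scores candidate edge flips by the resulting change in post-training loss, a finite-difference stand-in for the meta-gradient through retraining. All function names, the toy model, and the greedy budget loop are illustrative assumptions; the paper's method differentiates through the training procedure directly.

```python
import numpy as np

def normalize_adj(A):
    # GCN-style symmetric normalization with self-loops: D^-1/2 (A+I) D^-1/2
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def train_and_eval(A, X, y, train_mask, steps=50, lr=0.5):
    # Train a toy one-layer "GCN" (sigmoid(A_norm @ X @ w)) with plain gradient
    # descent, then return the cross-entropy over all nodes -- the quantity the
    # attacker wants to be large *after* retraining (illustrative surrogate).
    rng = np.random.default_rng(0)  # fixed seed: retraining is part of the objective
    A_n = normalize_adj(A)
    w = rng.normal(scale=0.1, size=X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(A_n @ X @ w)))
        grad = X.T @ (A_n.T @ ((p - y) * train_mask)) / train_mask.sum()
        w -= lr * grad
    p = 1.0 / (1.0 + np.exp(-(A_n @ X @ w)))
    eps = 1e-9
    return -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))

def greedy_meta_attack(A, X, y, train_mask, budget=1):
    # Greedy structure attack: score every single symmetric edge flip by the
    # increase in post-training loss (finite differences instead of the true
    # meta-gradient) and commit the best flip, up to `budget` times.
    A = A.copy()
    for _ in range(budget):
        base = train_and_eval(A, X, y, train_mask)
        best, best_gain = None, 0.0
        for i in range(A.shape[0]):
            for j in range(i + 1, A.shape[0]):
                A[i, j] = A[j, i] = 1 - A[i, j]   # flip edge (i, j)
                gain = train_and_eval(A, X, y, train_mask) - base
                A[i, j] = A[j, i] = 1 - A[i, j]   # undo the flip
                if gain > best_gain:
                    best, best_gain = (i, j), gain
        if best is None:
            break  # no flip increases the loss
        i, j = best
        A[i, j] = A[j, i] = 1 - A[i, j]
    return A
```

On a toy two-community graph this picks the flip that most degrades the retrained model; the exhaustive O(N^2) flip search is exactly what the paper's meta-gradient replaces with a single backward pass through training.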