Are Defenses for Graph Neural Networks Robust?
Authors: Felix Mujkanovic, Simon Geisler, Stephan Günnemann, Aleksandar Bojchevski
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform a thorough robustness analysis of 7 of the most popular defenses spanning the entire spectrum of strategies, i.e., aimed at improving the graph, the architecture, or the training. The results are sobering: most defenses show no or only marginal improvement compared to an undefended baseline. We provide the code, configurations, and a collection of perturbed graphs on the project website linked on the first page. |
| Researcher Affiliation | Academia | Felix Mujkanovic¹, Simon Geisler¹, Stephan Günnemann¹, Aleksandar Bojchevski²; ¹Dept. of Computer Science & Munich Data Science Institute, Technical University of Munich; ²CISPA Helmholtz Center for Information Security |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We provide the code, configurations, and a collection of perturbed graphs on the project website linked on the first page. Project page: https://www.cs.cit.tum.de/daml/are-gnn-defenses-robust/ |
| Open Datasets | Yes | We use the two most widely used datasets in the literature, namely Cora ML [2] and Citeseer [19]. |
| Dataset Splits | Yes | We repeat the experiments for five different data splits (10% training, 10% validation, 80% testing) and report the means and variances. |
| Hardware Specification | Yes | We use an internal cluster with Nvidia GTX 1080Ti GPUs. |
| Software Dependencies | No | The paper does not provide specific version numbers for software dependencies. |
| Experiment Setup | Yes | Defense hyperparameters. When first attacking the defenses, we observed that many exhibit poor robustness using the hyperparameters provided by their authors. To not accidentally dismiss a defense as non-robust, we tune the hyperparameters such that the clean accuracy remains constant but the robustness w.r.t. adaptive attacks is improved. We report the configurations and verify the success of our tuning in Appendix H. |
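The splitting procedure quoted under "Dataset Splits" (five repetitions of a 10% training / 10% validation / 80% testing node split) can be sketched as follows. This is a minimal illustration, not the authors' released code; the function name `random_split` and the use of a seeded NumPy permutation are assumptions for the example.

```python
import numpy as np

def random_split(n_nodes, seed, train_frac=0.1, val_frac=0.1):
    """Sketch of one 10%/10%/80% random node split (assumed logic,
    not the paper's actual implementation)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n_nodes)
    n_train = int(train_frac * n_nodes)
    n_val = int(val_frac * n_nodes)
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:]
    return train, val, test

# Five independent splits, over which means and variances are reported.
splits = [random_split(n_nodes=1000, seed=s) for s in range(5)]
```

Repeating the experiment over several such seeded splits and reporting mean and variance is what allows the paper to separate genuine robustness differences from split-to-split noise.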