GNNGuard: Defending Graph Neural Networks against Adversarial Attacks

Authors: Xiang Zhang, Marinka Zitnik

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Across five GNNs, three defense methods, and four datasets, including a challenging human disease graph, experiments show that GNNGUARD outperforms existing defense approaches by 15.3% on average.
Researcher Affiliation | Academia | Xiang Zhang (Harvard University, xiang_zhang@hms.harvard.edu); Marinka Zitnik (Harvard University, marinka@hms.harvard.edu)
Pseudocode | Yes | Algorithm 1: GNNGUARD. (An illustrative sketch of the algorithm's edge-reweighting step appears below the table.)
Open Source Code | Yes | Code and datasets are available at https://github.com/mims-harvard/GNNGuard.
Open Datasets | Yes | We use two citation networks with undirected edges and binary features: Cora [48] and Citeseer [49]. We also consider a directed graph with numeric node features, ogbn-arxiv [50], representing a citation network of CS papers published between 1971 and 2014. We use a Disease Pathway (DP) [51] graph with continuous features describing a system of interacting proteins whose malfunction collectively leads to diseases. (A dataset-loading sketch appears below the table.)
Dataset Splits | No | The paper discusses how target nodes are selected for adversarial attacks and the overall test set V_test, but it does not provide explicit training/validation/test split percentages or counts for the graph datasets used.
Hardware Specification | No | The paper does not specify any particular hardware components (e.g., GPU models, CPU types, or memory) used for conducting the experiments.
Software Dependencies | No | The paper mentions using various GNN models (GCN, GAT, GIN, etc.) but does not provide specific version numbers for any software, libraries, or programming languages used in the experiments.
Experiment Setup | Yes | In Mettack, we set the perturbation rate as 20% (i.e., Δ = 0.2E) with the Meta-Self training strategy. In Nettack-Di, Δ = N̂^0_u. In Nettack-In, we perturb 5 neighbors of the target node and set Δ = N̂^0_v for all neighbors. (A budget sketch appears below the table.)
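The three parenthetical notes above point here. First, the "Pseudocode" row: Algorithm 1 (GNNGUARD) estimates how relevant each neighbor is to a node and prunes likely-adversarial edges. The sketch below is a minimal illustration of that edge-reweighting idea, assuming cosine similarity over node features; the function name and the default threshold p0 are our own choices, and the paper's full algorithm additionally maintains a layer-wise graph memory that is omitted here.

```python
import numpy as np

def gnnguard_edge_weights(X, edges, p0=0.1):
    """Illustrative GNNGuard-style defensive edge reweighting (not the
    authors' exact code).

    X: (n, d) array of node features or hidden representations.
    edges: iterable of (u, v) index pairs.
    p0: hypothetical pruning threshold; edges whose similarity falls
        below it are removed (weight 0).
    """
    weights = {}
    for u, v in edges:
        xu, xv = X[u], X[v]
        denom = np.linalg.norm(xu) * np.linalg.norm(xv)
        sim = float(xu @ xv) / denom if denom > 0 else 0.0  # cosine similarity
        weights[(u, v)] = sim if sim >= p0 else 0.0         # prune dissimilar edges
    # Normalize surviving weights per source node so they form a distribution.
    totals = {}
    for (u, _), w in weights.items():
        totals[u] = totals.get(u, 0.0) + w
    return {(u, v): (w / totals[u] if totals[u] > 0 else 0.0)
            for (u, v), w in weights.items()}
```

For the "Open Datasets" row, ogbn-arxiv can be fetched through the OGB package. The paper does not show its data-loading code, so this is only one plausible way to do it:

```python
from ogb.nodeproppred import PygNodePropPredDataset

dataset = PygNodePropPredDataset(name="ogbn-arxiv")
graph = dataset[0]               # a torch_geometric.data.Data object
split = dataset.get_idx_split()  # OGB's standard train/valid/test node indices
print(graph.num_nodes, graph.num_edges)
```

OGB ships standard train/valid/test node splits for ogbn-arxiv, but, as the "Dataset Splits" row records, the paper itself does not report explicit splits for the datasets in general.

Finally, for the "Experiment Setup" row, the quoted budgets reduce to simple arithmetic. The sketch assumes our reading of the notation, namely that N̂^0_u denotes the degree of node u in the clean graph:

```python
import networkx as nx

def attack_budgets(G: nx.Graph, target, rate=0.2, n_influencers=5):
    """Perturbation budgets matching the quoted setup (illustrative)."""
    mettack = int(rate * G.number_of_edges())           # Mettack: Δ = 0.2E
    nettack_di = G.degree(target)                       # Nettack-Di: Δ = N̂^0_u
    influencers = list(G.neighbors(target))[:n_influencers]
    nettack_in = {v: G.degree(v) for v in influencers}  # Nettack-In: Δ = N̂^0_v
    return mettack, nettack_di, nettack_in
```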