Adversarial Weight Perturbation Improves Generalization in Graph Neural Networks

Authors: Yihan Wu, Aleksandar Bojchevski, Heng Huang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct comprehensive experiments to show the effect of WT-AWP on the natural and robustness performance of different GNNs for both node classification and graph classification tasks."
Researcher Affiliation | Academia | "1 Electrical and Computer Engineering, University of Pittsburgh, PA, USA; 2 CISPA Helmholtz Center for Information Security"
Pseudocode | Yes | "Algorithm 1: WT-AWP: Weighted Truncated Adversarial Weight Perturbation"
Open Source Code | No | No explicit statement or link to the authors' own open-source code is provided.
Open Datasets | Yes | "Datasets. We use three benchmark datasets, including two citation networks, Cora and Citeseer (Sen et al. 2008), and one blog dataset Polblogs (Adamic and Glance 2005)."
Dataset Splits | Yes | "We use 10% nodes for training, 10% for validating and the rest 80% for testing."
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are provided for the experiments.
Software Dependencies | No | The paper mentions "Pytorch Geometric (Fey and Lenssen 2019) and Deep-Robust (Li et al. 2020)" but gives no version numbers for these or any other software dependencies.
Experiment Setup | Yes | "To achieve fair comparison we keep the same training settings for all models. We use a 2-layer structure... For GCN and PPNP, the hidden dimensionality is 64; for GAT, we use 8 heads with size 8. We choose K = 10, α = 0.1 in PPNP."
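The PPNP hyperparameters quoted in the Experiment Setup row (K = 10, α = 0.1) refer to the truncated personalized-PageRank propagation used by (A)PPNP. A minimal numpy sketch of that propagation rule, assuming a symmetrically normalized adjacency with self-loops; the tiny 3-node graph and one-hot features are illustrative, not from the paper:

```python
import numpy as np

def appnp_propagate(A_hat, H, K=10, alpha=0.1):
    """Truncated personalized-PageRank propagation (APPNP-style).

    A_hat : normalized adjacency with self-loops, shape (n, n)
    H     : initial per-node predictions/features, shape (n, c)
    Iterates Z <- (1 - alpha) * A_hat @ Z + alpha * H for K steps.
    """
    Z = H.copy()
    for _ in range(K):
        Z = (1.0 - alpha) * A_hat @ Z + alpha * H
    return Z

# Illustrative graph: a 3-node path (0-1-2) with self-loops already added.
A = np.array([[1., 1., 0.],
              [1., 1., 1.],
              [0., 1., 1.]])
deg = A.sum(axis=1)
A_hat = A / np.sqrt(deg)[:, None] / np.sqrt(deg)[None, :]  # D^-1/2 A D^-1/2

H = np.eye(3)  # one-hot "logits", one class per node
Z = appnp_propagate(A_hat, H, K=10, alpha=0.1)
```

With α = 0.1 each node's output is dominated by propagated neighborhood information; setting α = 1 would disable propagation entirely and return H unchanged.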
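The Dataset Splits row quotes a 10% / 10% / 80% node partition. The paper does not specify the exact split procedure or seed, but a plain random partition of this shape can be sketched as follows (the Cora node count of 2708 is a known property of that benchmark, not a quote from the paper):

```python
import numpy as np

def random_node_split(num_nodes, train_frac=0.1, val_frac=0.1, seed=0):
    """Randomly partition node indices into disjoint train/val/test sets
    (10% / 10% / 80% as in the quoted setup)."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]
    return train_idx, val_idx, test_idx

# Cora has 2708 nodes.
train_idx, val_idx, test_idx = random_node_split(2708)
```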
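The Pseudocode row points to Algorithm 1, which trains against an adversarial perturbation of the model weights. The paper's exact WT-AWP procedure is not reproduced here; the following is a heavily simplified, generic weighted-AWP step on plain logistic regression with analytic gradients, where `rho` (perturbation radius), `gamma` (loss-mixing weight), and the toy data are all illustrative assumptions:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def loss_and_grad(w, X, y):
    """Binary cross-entropy loss and its gradient for logistic regression."""
    p = sigmoid(X @ w)
    eps = 1e-12
    loss = -np.mean(y * np.log(p + eps) + (1 - y) * np.log(1 - p + eps))
    grad = X.T @ (p - y) / len(y)
    return loss, grad

def weighted_awp_step(w, X, y, rho=0.1, gamma=0.5, lr=0.1):
    """One weighted adversarial-weight-perturbation update (generic sketch).

    1. Build a worst-case weight perturbation v of norm rho * ||w|| via a
       single normalized gradient-ascent step.
    2. Descend on the mixed loss (1 - gamma) * L(w) + gamma * L(w + v).
    """
    _, g_nat = loss_and_grad(w, X, y)
    v = rho * np.linalg.norm(w) * g_nat / (np.linalg.norm(g_nat) + 1e-12)
    _, g_adv = loss_and_grad(w + v, X, y)
    return w - lr * ((1.0 - gamma) * g_nat + gamma * g_adv)

# Toy linearly separable data (illustrative only).
rng = np.random.default_rng(0)
X = rng.normal(size=(64, 5))
w_true = np.array([1.0, -2.0, 0.5, 0.0, 1.5])
y = (X @ w_true > 0).astype(float)

w = np.zeros(5)
loss0, _ = loss_and_grad(w, X, y)
for _ in range(200):
    w = weighted_awp_step(w, X, y)
loss_final, _ = loss_and_grad(w, X, y)
```

The mixing weight `gamma` plays the role of the weighting in "Weighted" AWP; the truncation in the paper (restricting which layers are perturbed) has no analogue in this single-weight-vector toy.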