Adversarial Robustness in Graph Neural Networks: A Hamiltonian Approach

Authors: Kai Zhao, Qiyu Kang, Yang Song, Rui She, Sijie Wang, Wee Peng Tay

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The adversarial robustness of different neural flow GNNs is empirically compared on several benchmark datasets under a variety of adversarial attacks. Extensive numerical experiments demonstrate that GNNs leveraging conservative Hamiltonian flows with Lyapunov stability substantially improve robustness against adversarial perturbations.
Researcher Affiliation | Collaboration | Kai Zhao (Nanyang Technological University); Qiyu Kang (Nanyang Technological University); Yang Song (C3 AI, Singapore); Rui She (Nanyang Technological University); Sijie Wang (Nanyang Technological University); Wee Peng Tay (Nanyang Technological University)
Pseudocode | Yes | Algorithm 1: Graph Node Embedding Learning with HANG (a minimal code sketch of this flow appears below the table).
Open Source Code | Yes | The implementation code for the experiments is available at https://github.com/zknus/NeurIPS-2023-HANG-Robustness.
Open Datasets | Yes | Our datasets include citation networks (Cora, Citeseer, Pubmed) [51], the Coauthor academic network [52], an Amazon co-purchase network (Computers) [52], and the Ogbn-Arxiv dataset [53].
Dataset Splits | Yes | For inductive learning, we follow the data splitting method in the GRB framework [54], with 60% for training, 10% for validation, and 20% for testing (an illustrative split helper follows the table).
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) used for running the experiments are mentioned; only general statements such as 'on a cluster' were found.
Software Dependencies | No | The ODE is solved using the solver from [18]. We consider fixed-step Euler and RK4, along with adaptive-step Dopri5, from [18], and Symplectic-Euler from [35]. While specific solvers are named, overall software dependencies (e.g., Python and PyTorch versions) are not pinned to specific version numbers (a solver usage sketch follows the table).
Experiment Setup | Yes | The raw node features are compressed to a fixed dimension, such as 64, using a fully connected (FC) layer to generate the initial features q(0)... For all datasets, the time T is set to 3 with a fixed step size of 1. This setup aligns with a fair comparison to three-layer GNNs, since three unit-size Euler steps correspond to three discrete layers (the final sketch below assembles this setup).
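
For readers who want the shape of Algorithm 1 in code, the following is a minimal PyTorch sketch of a conservative Hamiltonian flow over node states. The energy network H_net, the split of features into position q and momentum p, and the omission of graph coupling are all simplifying assumptions here; the authors' repository linked above is the authoritative implementation.

import torch
import torch.nn as nn

class HamiltonianFlow(nn.Module):
    # Vector field of a conservative Hamiltonian flow over node states.
    # H_net is a hypothetical stand-in for the paper's learnable energy
    # and, unlike the real model, ignores the graph structure.
    def __init__(self, dim, hidden=64):
        super().__init__()
        self.H_net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def forward(self, t, state):
        # Split the joint state into generalized position q and momentum p.
        q, p = state.chunk(2, dim=-1)
        with torch.enable_grad():
            q = q if q.requires_grad else q.detach().requires_grad_(True)
            p = p if p.requires_grad else p.detach().requires_grad_(True)
            H = self.H_net(torch.cat([q, p], dim=-1)).sum()  # scalar energy
            dHdq, dHdp = torch.autograd.grad(H, (q, p), create_graph=True)
        # Hamilton's equations dq/dt = dH/dp, dp/dt = -dH/dq conserve the
        # learned energy along trajectories, which underpins the paper's
        # stability argument for robustness.
        return torch.cat([dHdp, -dHdq], dim=-1)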
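
As a rough illustration of the quoted 60%/10%/20% split (a generic random node split, not the GRB framework's own splitting logic, which lives in its codebase), one might write:

import torch

def random_node_split(num_nodes, train=0.6, val=0.1, test=0.2, seed=0):
    # Hypothetical helper: shuffle node indices and carve out boolean
    # train/val/test masks in the quoted proportions; any remainder
    # is simply left unassigned.
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_tr, n_va, n_te = (int(r * num_nodes) for r in (train, val, test))
    splits = {
        "train": perm[:n_tr],
        "val": perm[n_tr:n_tr + n_va],
        "test": perm[n_tr + n_va:n_tr + n_va + n_te],
    }
    masks = {}
    for name, idx in splits.items():
        m = torch.zeros(num_nodes, dtype=torch.bool)
        m[idx] = True
        masks[name] = m
    return masks

masks = random_node_split(2708)  # e.g., Cora has 2,708 nodes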
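
Assuming reference [18] is the torchdiffeq package of Chen et al., the named solvers map directly onto its standard options. A hedged usage sketch, reusing the HamiltonianFlow above as the vector field (shapes are illustrative):

import torch
from torchdiffeq import odeint  # assumption: [18] is torchdiffeq

flow = HamiltonianFlow(dim=64)       # vector field from the sketch above
state0 = torch.randn(2708, 128)      # illustrative [q(0); p(0)] node states
t = torch.tensor([0.0, 3.0])         # integrate from t = 0 to T = 3

# Fixed-step Euler with step size 1, matching the quoted setup.
zT = odeint(flow, state0, t, method="euler", options={"step_size": 1.0})[-1]

# RK4 uses the same interface; adaptive Dopri5 is controlled by tolerances.
zT_rk4 = odeint(flow, state0, t, method="rk4", options={"step_size": 1.0})[-1]
zT_dp5 = odeint(flow, state0, t, method="dopri5", rtol=1e-3, atol=1e-4)[-1]

Symplectic-Euler [35] is not shipped with torchdiffeq and would come from a separate implementation.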
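
Finally, the quoted experiment setup (FC compression to 64 dimensions, T = 3, step size 1) can be assembled into a hypothetical end-to-end module. The momentum initialization and the classifier head are assumptions, and graph coupling is again omitted for brevity:

import torch
import torch.nn as nn
from torchdiffeq import odeint

class HANGClassifier(nn.Module):
    # Hypothetical wrapper around the flow sketch: an FC encoder
    # compresses raw features to 64 dimensions as q(0), a second FC
    # layer (an assumption) initializes p(0), the flow is integrated
    # to T = 3 with step size 1, and an FC head emits class logits.
    def __init__(self, in_dim, num_classes, hidden=64, T=3.0, step=1.0):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden)
        self.momentum = nn.Linear(in_dim, hidden)
        self.flow = HamiltonianFlow(dim=hidden)
        self.head = nn.Linear(hidden, num_classes)
        self.t = torch.tensor([0.0, T])
        self.step = step

    def forward(self, x):
        state0 = torch.cat([self.encoder(x), self.momentum(x)], dim=-1)
        stateT = odeint(self.flow, state0, self.t, method="euler",
                        options={"step_size": self.step})[-1]
        q_T, _ = stateT.chunk(2, dim=-1)  # read out the position part
        return self.head(q_T)

model = HANGClassifier(in_dim=1433, num_classes=7)  # Cora-sized example
logits = model(torch.randn(2708, 1433))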