Robust Graph Neural Networks via Unbiased Aggregation

Authors: Zhichao Hou, Ruiqi Feng, Tyler Derr, Xiaorui Liu

NeurIPS 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Our comprehensive experiments confirm the strong robustness of our proposed model under various scenarios, and the ablation study provides a deep understanding of its advantages. Our code is available at https://github.com/chris-hzc/RUNG." |
| Researcher Affiliation | Academia | Zhichao Hou¹, Ruiqi Feng¹, Tyler Derr², Xiaorui Liu¹ (¹North Carolina State University, ²Vanderbilt University) |
| Pseudocode | No | The paper describes algorithms using mathematical equations and prose but does not provide structured pseudocode or an algorithm block. |
| Open Source Code | Yes | "Our code is available at https://github.com/chris-hzc/RUNG." |
| Open Datasets | Yes | "We test our RUNG with the node classification task on two widely used real-world citation networks, Cora ML and Citeseer [29], as well as a large-scale network, Ogbn-Arxiv [30]." |
| Dataset Splits | Yes | "We adopt the data split of 10% training, 10% validation, and 80% testing, and report the classification accuracy of the attacked nodes following [5]." |
| Hardware Specification | No | The paper does not provide specific details on the hardware used, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions libraries/frameworks implicitly through citations but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | "The model hyperparameters including learning rate, weight decay, and dropout rate are tuned as in [5]. Other hyperparameters follow the settings in the original papers. RUNG uses an MLP connected to 10 graph aggregation layers following the decoupled GNN architecture of APPNP. λ̂ = 1 / (1 + λ) is tuned in {0.7, 0.8, 0.9}, and γ is tuned in {0.5, 1, 2, 3, 5}. We chose the hyperparameter setting that yields the best robustness without a notable impact (smaller than 1%) on the clean accuracy following [35]." |
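The reported setup (a random 10%/10%/80% node split and a small grid over λ̂ and γ) can be sketched as below. This is a minimal illustration, not the authors' code: the function name `split_nodes`, the seed handling, and the grid variable names are assumptions; only the split fractions and the search spaces {0.7, 0.8, 0.9} and {0.5, 1, 2, 3, 5} come from the paper.

```python
import numpy as np

def split_nodes(num_nodes, train_frac=0.1, val_frac=0.1, seed=0):
    """Randomly partition node indices into train/val/test sets.

    Defaults follow the paper's reported 10% train / 10% validation /
    80% test split; the seeding scheme here is a hypothetical choice.
    """
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    train = perm[:n_train]
    val = perm[n_train:n_train + n_val]
    test = perm[n_train + n_val:]
    return train, val, test

# Hyperparameter search spaces quoted in the paper.
# λ̂ = 1 / (1 + λ) reparameterizes the aggregation weight λ.
lambda_hat_grid = [0.7, 0.8, 0.9]
gamma_grid = [0.5, 1, 2, 3, 5]

# Candidate configurations to evaluate (selection criterion in the
# paper: best robustness with < 1% drop in clean accuracy).
configs = [(lh, g) for lh in lambda_hat_grid for g in gamma_grid]
```

For a Cora-ML-sized graph, `split_nodes(2995)` yields disjoint index sets whose sizes match the 10/10/80 split, and `configs` enumerates the 15 (λ̂, γ) pairs the grid search would cover.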