Graph Neural Networks with Adaptive Residual

Authors: Xiaorui Liu, Jiayuan Ding, Wei Jin, Han Xu, Yao Ma, Zitao Liu, Jiliang Tang

NeurIPS 2021

Reproducibility assessment. Each entry below gives the variable, the assessed result, and the supporting LLM response:
Research Type: Experimental. In this work, we first perform empirical investigations on how representative GNN models behave on graphs with abnormal features. Specifically, based upon standard benchmark datasets, we simulate the abnormal features by replacing the features of randomly selected nodes with random Gaussian noise. Extensive experiments under various abnormal-feature scenarios demonstrate the effectiveness of the proposed algorithm.
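
The abnormal-feature protocol described above is simple to reproduce. A minimal sketch follows; the function name and the noise_ratio argument are ours, not the paper's:

```python
import torch

def inject_abnormal_features(x, noise_ratio, seed=0):
    # Replace the features of a randomly selected subset of nodes with
    # i.i.d. standard Gaussian noise, mirroring the paper's described setup.
    g = torch.Generator().manual_seed(seed)
    num_nodes = x.size(0)
    num_noisy = int(noise_ratio * num_nodes)
    noisy_idx = torch.randperm(num_nodes, generator=g)[:num_noisy]
    x = x.clone()
    x[noisy_idx] = torch.randn(num_noisy, x.size(1), generator=g)
    return x
```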
Researcher Affiliation: Collaboration. Michigan State University, East Lansing, MI, USA; New Jersey Institute of Technology, Newark, NJ, USA; TAL Education Group, Beijing, China.
Pseudocode: Yes. The proposed adaptive message passing (AMP) scheme is shown in Figure 4, and a diagram is shown in Figure 3. (Figure 4 gives the step-by-step procedure of AMP.)
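
Figure 4 is not reproduced here, but the AMP update it lists is a proximal-gradient step: a gradient step on the Laplacian smoothing term followed by a row-wise soft-threshold toward the input features. Below is a minimal sketch, assuming dense tensors, a symmetrically normalized adjacency adj_norm, and the paper's choice γ = 1/(2(1 − λ)); amp_step is our name, not the repository's:

```python
import torch

def amp_step(F, X, adj_norm, lam, gamma):
    # Gradient step on the (1 - lam) * tr(F^T (I - A_hat) F) smoothing term;
    # with gamma = 1 / (2 * (1 - lam)) this reduces to Y = A_hat @ F.
    Y = (1 - 2 * gamma * (1 - lam)) * F + 2 * gamma * (1 - lam) * (adj_norm @ F)
    # Proximal step for the lam * ||F - X||_21 residual term: row-wise
    # soft-thresholding of the residual toward the input features X,
    # which yields the node-adaptive residual weights.
    residual = Y - X
    row_norm = residual.norm(dim=1, keepdim=True).clamp_min(1e-12)
    scale = (1 - gamma * lam / row_norm).clamp_min(0.0)
    return X + scale * residual
```

Iterating this step K times (K = 10 in the experiments) produces the adaptive-residual propagation.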
Open Source Code: Yes. The implementation is available at https://github.com/lxiaorui/AirGNN.
Open Datasets: Yes. We conduct experiments on 8 real-world datasets: three citation graphs (Cora, Citeseer, Pubmed [21]), two co-authorship graphs (Coauthor CS and Coauthor Physics [22]), two co-purchase graphs (Amazon Computers and Amazon Photo [22]), and one OGB dataset (ogbn-arxiv [23]). All are well-known academic benchmarks with citations.
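
All eight datasets have off-the-shelf loaders. The sketch below uses PyTorch Geometric and OGB; the paper does not state its exact data pipeline, so treat these loaders as one plausible choice:

```python
from torch_geometric.datasets import Planetoid, Coauthor, Amazon
from ogb.nodeproppred import PygNodePropPredDataset

root = 'data'
cora      = Planetoid(root, name='Cora')      # citation graphs
citeseer  = Planetoid(root, name='Citeseer')
pubmed    = Planetoid(root, name='Pubmed')
cs        = Coauthor(root, name='CS')         # co-authorship graphs
physics   = Coauthor(root, name='Physics')
computers = Amazon(root, name='Computers')    # co-purchase graphs
photo     = Amazon(root, name='Photo')
arxiv = PygNodePropPredDataset(name='ogbn-arxiv', root=root)  # OGB dataset
```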
Dataset Splits: Yes. We design semi-supervised node classification experiments on three common datasets (Cora, Citeseer, and Pubmed), following the data splits in [3]. More details about the data statistics and splits are summarized in Appendix B. If the hyperparameter λ of AirGNN is tuned on the validation sets after injecting abnormal features, the performance is even better, as discussed in Appendix D.2.
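
The splits of [3] are the standard "public" Planetoid splits: 20 labeled nodes per class for training, 500 nodes for validation, and 1,000 for testing. In PyTorch Geometric they are exposed as boolean masks:

```python
from torch_geometric.datasets import Planetoid

data = Planetoid('data', name='Cora', split='public')[0]
# 140 training nodes for Cora (7 classes x 20), 500 validation, 1000 test
print(data.train_mask.sum().item(),
      data.val_mask.sum().item(),
      data.test_mask.sum().item())
```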
Hardware Specification: No. The paper does not describe the hardware used to run the experiments (GPU models, CPU specifications, or memory). The authors' checklist explicitly answers '[No]' to 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)?'
Software Dependencies: No. The paper mentions the 'PyTorch library for adversarial attacks and defenses' and the Adam optimizer [25], but does not specify version numbers for PyTorch or any other software dependency.
Experiment Setup: Yes. We fix the learning rate to 0.01, dropout to 0.8, and weight decay to 0.0005. Moreover, we set γ = 1/(2(1 − λ)) as suggested by Theorem 1. We choose K = 10 and tune λ in the range [0, 1]. The Adam optimizer [25] is used in all experiments.
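
For concreteness, the reported hyperparameters translate into the following hypothetical configuration. The two-layer network is only a stand-in for the authors' AirGNN model (which couples an MLP with K AMP propagation steps), and lam = 0.5 is an illustrative value, since λ is tuned per dataset:

```python
import torch
import torch.nn as nn

# Hyperparameters reported in the paper; lam is tuned in [0, 1] per dataset.
LR, DROPOUT, WEIGHT_DECAY, K = 0.01, 0.8, 0.0005, 10
lam = 0.5                    # illustrative value, not the tuned one
gamma = 1 / (2 * (1 - lam))  # step size suggested by Theorem 1

# Stand-in encoder; the actual model applies K AMP steps after the MLP.
model = nn.Sequential(
    nn.Dropout(DROPOUT),
    nn.Linear(1433, 64),     # 1433 = Cora's feature dimension
    nn.ReLU(),
    nn.Dropout(DROPOUT),
    nn.Linear(64, 7),        # 7 = Cora's number of classes
)
optimizer = torch.optim.Adam(model.parameters(), lr=LR,
                             weight_decay=WEIGHT_DECAY)
```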