On the Robustness of Graph Neural Diffusion to Topology Perturbations

Authors: Yang Song, Qiyu Kang, Sijie Wang, Kai Zhao, Wee Peng Tay

Venue: NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We empirically demonstrate that graph neural PDEs are intrinsically more robust against topology perturbation as compared to other GNNs. ... We verify that the new model achieves comparable state-of-the-art performance on several benchmark datasets. (Section 5, Experiments) In this section, we compare graph neural PDEs under different flows to popular GNN architectures: ...
Researcher Affiliation | Collaboration | Yang Song (Nanyang Technological University; C3 AI), Qiyu Kang (Nanyang Technological University)
Pseudocode | No | The paper includes architectural diagrams (e.g., Fig. 2) but does not contain explicitly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our experiment codes are provided in https://github.com/zknus/Robustness-of-Graph-Neural-Diffusion.
Open Datasets | Yes | In our experiments, we use the following datasets: Cora (citation networks) [44], Citeseer (citation networks) [48] and PubMed (biomedical literature) [49]. We use a refined version of these datasets provided by [50].
Dataset Splits | Yes | The remaining nodes are divided into a train set (60%) and val set (10%), for training and validation respectively. (A hedged loading and split sketch follows the table.)
Hardware Specification | Yes | Our experiments are run on a GeForce RTX 3090 GPU.
Software Dependencies | No | The paper mentions using an implicit Adams PDE solver and implicitly refers to PyTorch by discussing GNNs, but it does not specify version numbers for any software dependencies.
Experiment Setup | Yes | The implicit Adams PDE solver with step size 2 is used for Beltrami. ... For GRAND/BLEND, we stack three neural PDE layers using the architecture proposed in Fig. 2. ... The maximum allowable injected nodes and the maximum allowable injected edges are set to 50 for Cora and Citeseer, and 300 for PubMed. (A hedged solver-setup sketch follows the table.)
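For context on the Open Datasets and Dataset Splits rows, a minimal sketch of how such a 60%/10% node split could be reproduced is shown below. This is not the authors' pipeline: it assumes the standard Planetoid loaders from PyTorch Geometric rather than the refined dataset versions from [50], it splits all nodes rather than the "remaining nodes" after target selection, and the make_split helper is hypothetical.

```python
import torch
from torch_geometric.datasets import Planetoid

# Load a standard Planetoid dataset (the paper uses refined versions from [50]).
dataset = Planetoid(root='data', name='Cora')  # also 'Citeseer', 'PubMed'
data = dataset[0]

def make_split(num_nodes, train_frac=0.6, val_frac=0.1, seed=0):
    """Hypothetical helper: random 60% train / 10% val split over node indices."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val]

train_idx, val_idx = make_split(data.num_nodes)
```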
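The solver settings in the Experiment Setup row can be illustrated as follows. This is a sketch under the assumption that the implementation builds on torchdiffeq, which exposes a fixed-step implicit_adams method; LinearDiffusion is a toy stand-in for the paper's Beltrami/GRAND flows, not the actual model.

```python
import torch
from torchdiffeq import odeint

class LinearDiffusion(torch.nn.Module):
    """Toy flow dx/dt = (A - I) x; A is a normalized adjacency matrix."""
    def __init__(self, adj):
        super().__init__()
        self.adj = adj

    def forward(self, t, x):
        # Node features drift toward the average of their neighbors.
        return self.adj @ x - x

def diffuse(x0, adj, t_end=4.0):
    """Integrate the flow with the implicit Adams solver at step size 2."""
    func = LinearDiffusion(adj)
    t = torch.tensor([0.0, t_end])
    xt = odeint(func, x0, t, method='implicit_adams',
                options={'step_size': 2.0})
    return xt[-1]  # node features at integration time t_end

# Example: diffuse random features over a small directed ring graph.
N = 5
adj = torch.roll(torch.eye(N), 1, dims=1)
out = diffuse(torch.randn(N, 8), adj)
```

At a high level, stacking three such diffusion blocks with learned transformations between them mirrors the reported three-layer GRAND/BLEND architecture.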