GRAND++: Graph Neural Diffusion with A Source Term

Authors: Matthew Thorpe, Tan Minh Nguyen, Hedi Xia, Thomas Strohmer, Andrea Bertozzi, Stanley Osher, Bao Wang

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experimentally verify the above two advantages on various graph deep learning benchmark tasks, showing a significant improvement over many existing graph neural networks. In this section, we compare the performance of GRAND++ with GRAND and several other popular GNNs on various graph node classification tasks.
Researcher Affiliation | Academia | (1) Department of Mathematics, University of Manchester, Manchester M13 9PL, UK; (2) Department of Mathematics, UCLA, Los Angeles, CA 90095, USA; (3) Department of Mathematics, UC Davis, Davis, CA 95616, USA; (4) Department of Mathematics and Scientific Computing and Imaging (SCI) Institute, University of Utah, Salt Lake City, UT 84102, USA
Pseudocode | No | The paper explains its algorithms and models through mathematical equations and prose, but it does not include a clearly labeled 'Algorithm' or 'Pseudocode' block (a hedged sketch of the described dynamics appears after this table).
Open Source Code | No | The paper contains no explicit statement about releasing source code, nor does it provide a link to a code repository for the methodology described.
Open Datasets | Yes | Following [10], we study seven graph node classification datasets, namely CORA, CiteSeer, PubMed, Coauthor CS, Computer, Photo, and ogbn-arxiv; we describe these datasets in Appendix D.1. (See also Table 3: Summary of the graph node classification datasets.)
Dataset Splits | Yes | For all experiments, we run 100 splits for each dataset with 20 random seeds for each split, which are conducted on a server with four NVIDIA RTX 3090 graphics cards. All networks have 2 layers, and each experiment is run with 100 splits and 20 random seeds following [10].
Hardware Specification | Yes | For all experiments, we run 100 splits for each dataset with 20 random seeds for each split, which are conducted on a server with four NVIDIA RTX 3090 graphics cards.
Software Dependencies | No | The paper mentions 'neural ordinary differential equations (ODEs)' and 'numerical ODE solvers' but does not name specific software with version numbers (e.g., PyTorch, TensorFlow, or a particular ODE-solver library) used for the implementation (one commonly assumed option is sketched after this table).
Experiment Setup | Yes | Unless mentioned otherwise, we use the same hyperparameters for GRAND++ as those used for GRAND in [10]. Table 4 lists the fine-tuned T for the results in Table 1.
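Since the paper provides no pseudocode block, here is a minimal sketch of the kind of update it describes: graph diffusion plus a source term that keeps re-injecting the features of labeled nodes. This is an assumption-laden illustration, not the authors' implementation: it uses uniform random-walk diffusion in place of the paper's learned attention-based diffusivity, and the names `grandpp_euler_step`, `labeled_mask`, and `beta` are hypothetical.

```python
import numpy as np

def grandpp_euler_step(X, A_hat, X0, labeled_mask, dt=0.1, beta=1.0):
    """One explicit-Euler step of graph diffusion with a source term.

    Sketch only: A_hat is a row-normalized adjacency standing in for the
    paper's learned diffusivity, and the source term re-injects the initial
    features of labeled nodes; GRAND++'s exact functional form may differ.
    """
    diffusion = A_hat @ X - X                    # (A - I) X: graph heat flow
    source = beta * labeled_mask[:, None] * X0   # source term at labeled nodes
    return X + dt * (diffusion + source)

# Tiny usage example on a 3-node path graph; all values are illustrative.
A = np.array([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
A_hat = A / A.sum(axis=1, keepdims=True)   # random-walk normalization
X0 = np.random.randn(3, 4)                 # initial node features
labeled_mask = np.array([1., 0., 0.])      # only node 0 carries a label
X = X0.copy()
for _ in range(20):
    X = grandpp_euler_step(X, A_hat, X0, labeled_mask)
```

Without the source term, repeated diffusion drives all node features toward a constant; the labeled-node injection is what counteracts that over-smoothing, which is the advantage GRAND++ claims over GRAND.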
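The paper also does not name an ODE solver library. One common choice in the neural-ODE GNN literature is torchdiffeq; the sketch below integrates the same toy dynamics with an adaptive Dormand-Prince solver up to a terminal time T. Both the use of torchdiffeq and the value of T are assumptions (the paper fine-tunes T per dataset in its Table 4), not documented dependencies.

```python
import torch
from torchdiffeq import odeint  # pip install torchdiffeq; assumed tooling, not cited by the paper

class DiffusionWithSource(torch.nn.Module):
    """dX/dt = (A_hat - I) X + beta * mask * X0 -- same sketch as above, as an ODE."""
    def __init__(self, A_hat, X0, labeled_mask, beta=1.0):
        super().__init__()
        self.A_hat = A_hat
        self.source = beta * labeled_mask.unsqueeze(1) * X0

    def forward(self, t, X):
        return self.A_hat @ X - X + self.source

# Toy graph and features; all values illustrative.
A = torch.tensor([[0., 1., 0.], [1., 0., 1.], [0., 1., 0.]])
A_hat = A / A.sum(dim=1, keepdim=True)
X0 = torch.randn(3, 4)
mask = torch.tensor([1., 0., 0.])
T = 4.0  # terminal diffusion time; the paper tunes T per dataset, 4.0 is arbitrary
func = DiffusionWithSource(A_hat, X0, mask)
X_T = odeint(func, X0, torch.tensor([0.0, T]), method='dopri5')[-1]  # features at time T
```

An adaptive solver such as dopri5 replaces the fixed-step Euler loop above and treats the step size as an accuracy/cost trade-off, which is why the terminal time T (rather than a layer count) becomes the tuned depth-like hyperparameter.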