Towards Dynamic Message Passing on Graphs
Authors: Junshu Sun, Chenxue Yang, Xiangyang Ji, Qingming Huang, Shuhui Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Evaluation on eighteen benchmarks demonstrates the superior performance of N2 over popular GNNs. N2 successfully scales to large-scale benchmarks and requires significantly fewer parameters for graph classification with the shared recurrent layer. |
| Researcher Affiliation | Academia | Junshu Sun (1,2), Chenxue Yang (3), Xiangyang Ji (4), Qingming Huang (1,2,5), Shuhui Wang (1,5). Affiliations: 1 Institute of Computing Technology, CAS; 2 University of Chinese Academy of Sciences; 3 Agriculture Information Institute, CAAS; 4 Tsinghua University; 5 Peng Cheng Laboratory |
| Pseudocode | No | No pseudocode or algorithm blocks were found in the paper. |
| Open Source Code | Yes | Codes are available at https://github.com/sunjss/N2. |
| Open Datasets | Yes | We adopt six benchmarks including three biochemical datasets (OGB-molpcba [27], PROTEINS [44], NCI1 [44]) and three social network datasets [44] (COLLAB, IMDB-BINARY and IMDB-MULTI). |
| Dataset Splits | Yes | Except for OGB-molpcba, we perform 10-fold cross-validation with LIBSVM following [71] and report average performance. Early stopping regularization is employed, where we stop the training if there is no further reduction in the validation loss during 300 epochs. We apply 60%/20%/20% train/val/test random splits for Amazon and Coauthor benchmarks and follow the standard splits as the original papers for the rest of the benchmarks. |
| Hardware Specification | Yes | N2 is implemented with PyTorch [47] and PyTorch Geometric [18], and trained on a single Nvidia GeForce RTX 4090. |
| Software Dependencies | No | N2 is implemented with PyTorch [47] and PyTorch Geometric [18]. Specific version numbers for these software dependencies are not explicitly provided in the text. |
| Experiment Setup | Yes | The detailed experimental settings are presented in Appendix C. We have performed grid search for the hyper-parameters in Tab. S6. ... The learning rate is set to 1e-3. We adopt Adam [32] as optimizer and set weight decay as 1e-6. Early stopping regularization is employed, where we stop the training if there is no further reduction in the validation loss during 300 epochs. The maximum epoch number is set to 1,000. The batch size is set to 1,024 on OGB-molpcba, 256 on PROTEINS, NCI1, IMDB-BINARY, IMDB-MULTI, and COLLAB. The detailed hyper-parameter settings on all benchmarks are reported in Tab. S6. (An illustrative configuration sketch, using assumed stand-ins, follows this table.) |
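
The split and training details quoted above (60%/20%/20% random splits, Adam with learning rate 1e-3 and weight decay 1e-6, early stopping after 300 epochs without validation-loss improvement, at most 1,000 epochs) can be wired together as a minimal sketch. The choice of the Coauthor-CS benchmark loader and the two-layer GCN below are illustrative assumptions standing in for the authors' N2 model (released at https://github.com/sunjss/N2); this is not the paper's code.

```python
import torch
import torch.nn.functional as F
from torch_geometric.datasets import Coauthor
from torch_geometric.nn import GCNConv

dataset = Coauthor(root="data/Coauthor", name="CS")
data = dataset[0]

# 60%/20%/20% train/val/test random node split, as reported for the
# Amazon and Coauthor benchmarks.
num_nodes = data.num_nodes
perm = torch.randperm(num_nodes)
n_train, n_val = int(0.6 * num_nodes), int(0.2 * num_nodes)
train_idx = perm[:n_train]
val_idx = perm[n_train:n_train + n_val]
test_idx = perm[n_train + n_val:]


class PlaceholderGNN(torch.nn.Module):
    """Two-layer GCN used only to make the sketch runnable; not the N2 model."""

    def __init__(self, in_dim, hidden_dim, out_dim):
        super().__init__()
        self.conv1 = GCNConv(in_dim, hidden_dim)
        self.conv2 = GCNConv(hidden_dim, out_dim)

    def forward(self, x, edge_index):
        x = F.relu(self.conv1(x, edge_index))
        return self.conv2(x, edge_index)


model = PlaceholderGNN(dataset.num_features, 64, dataset.num_classes)
# Optimizer settings quoted from the paper: Adam, lr 1e-3, weight decay 1e-6.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3, weight_decay=1e-6)

best_val_loss, patience, wait = float("inf"), 300, 0
for epoch in range(1000):  # maximum epoch number reported in the paper
    model.train()
    optimizer.zero_grad()
    out = model(data.x, data.edge_index)
    loss = F.cross_entropy(out[train_idx], data.y[train_idx])
    loss.backward()
    optimizer.step()

    # Early stopping on validation loss with a 300-epoch patience window.
    model.eval()
    with torch.no_grad():
        val_out = model(data.x, data.edge_index)
        val_loss = F.cross_entropy(val_out[val_idx], data.y[val_idx]).item()
    if val_loss < best_val_loss:
        best_val_loss, wait = val_loss, 0
    else:
        wait += 1
        if wait >= patience:
            break
```

For the graph-classification benchmarks (OGB-molpcba, PROTEINS, NCI1, COLLAB, IMDB-BINARY, IMDB-MULTI), the same optimizer and stopping rule would apply on top of the batch sizes quoted in the table (1,024 for OGB-molpcba, 256 elsewhere); the exact per-benchmark hyper-parameters are in Tab. S6 of the paper's appendix.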