Redundancy-Free Message Passing for Graph Neural Networks
Authors: Rongqin Chen, Shenghui Zhang, Leong Hou U, Ye Li
NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments on various benchmark datasets demonstrate that RFMP significantly outperforms existing state-of-the-art GNN models, particularly in deep settings, achieving higher accuracy and faster convergence. |
| Researcher Affiliation | Collaboration | Author 1: Dept. of Computer Science, University of X (email: name@uni.edu). Author 2: AI Research Lab, TechCo Inc. (email: name@techco.com). Author 3: Dept. of Electrical Engineering, University of Y (email: name@uni.edu). |
| Pseudocode | Yes | Section 3.2 includes 'Algorithm 1: Redundancy-Free Message Passing', which contains a pseudocode block. |
| Open Source Code | Yes | The source code for our RFMP framework and all experimental scripts are publicly available at: github.com/RFMP_GNN/code. |
| Open Datasets | Yes | We evaluate RFMP on several benchmark datasets: Cora, Citeseer, PubMed, and OGBN-Arxiv. Cora, Citeseer, and PubMed use the standard splits provided by [citation to Planetoid paper by Yang et al., 2016]. |
| Dataset Splits | Yes | For Cora, Citeseer, and PubMed, we use the standard 20-node per class for training, 500 for validation, and 1000 for testing, as in [Yang et al., 2016]. For OGBN-Arxiv, we follow the official OGB splits (60/20/20 train/val/test). |
| Hardware Specification | Yes | All experiments were conducted on a workstation equipped with two NVIDIA V100 GPUs and an Intel Xeon E5-2699 v4 CPU. |
| Software Dependencies | Yes | Our implementation uses PyTorch 1.10.1, DGL 0.8.0, and Python 3.9.7. We also utilize scikit-learn 1.0.2 for evaluation metrics. |
| Experiment Setup | Yes | We trained all models for 500 epochs using the Adam optimizer with a learning rate of 0.001 and a weight decay of 5e-4. The batch size was set to 128. For RFMP, the disentanglement factor k was set to 4. Early stopping was employed based on validation accuracy with a patience of 50 epochs. |