PANDA: Expanded Width-Aware Message Passing Beyond Rewiring
Authors: Jeongwhan Choi, Sumin Park, Hyowon Wi, Sung-Bae Cho, Noseong Park
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our method outperforms existing rewiring methods, suggesting that selectively expanding the hidden state of nodes can be a compelling alternative to graph rewiring for addressing the over-squashing. In Section 5, we empirically demonstrate that our PANDA outperforms existing rewiring methods. |
| Researcher Affiliation | Collaboration | 1Yonsei University, Seoul, South Korea. 2DNI Consulting, Seoul, South Korea. 3KAIST, Daejeon, South Korea. |
| Pseudocode | No | The paper provides mathematical equations for the proposed method but does not include any explicitly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | Yes | Our source code is available here: https://github.com/jeongwhanchoi/panda. |
| Open Datasets | Yes | We consider the REDDIT-BINARY (2,000 graphs), IMDB-BINARY (1,000 graphs), MUTAG (188 graphs), ENZYMES (600 graphs), PROTEINS (1,113 graphs), and COLLAB (5,000 graphs) tasks from TUDatasets (Morris et al., 2020). We use the Peptides (15,535 graphs) dataset from the Long Range Graph Benchmark (LRGB) (Dwivedi et al., 2022). |
| Dataset Splits | Yes | For each experiment, we accumulate the result in 100 random trials with an 80%/10%/10% train/val/test split and report the mean test accuracy, along with the 95% confidence interval. |
| Hardware Specification | Yes | The following software and hardware environments were used for all experiments: UBUNTU 18.04 LTS, PYTHON 3.7.13, PYTORCH 1.11.0, PYTORCH GEOMETRIC 2.0.4, NUMPY 1.21.6, NETWORKX 2.6.3, CUDA 11.3, NVIDIA Driver 465.19, an i9 CPU, and an NVIDIA RTX 3090. |
| Software Dependencies | Yes | The following software and hardware environments were used for all experiments: UBUNTU 18.04 LTS, PYTHON 3.7.13, PYTORCH 1.11.0, PYTORCH GEOMETRIC 2.0.4, NUMPY 1.21.6, NETWORKX 2.6.3, CUDA 11.3, NVIDIA Driver 465.19, an i9 CPU, and an NVIDIA RTX 3090. |
| Experiment Setup | Yes | For each task and baseline model, we used the same settings of GNN and optimization hyperparameters across all methods to rule out hyperparameter tuning as a source of performance gain. Table 4 shows common hyperparameters. Table 5 shows the search range for hyperparameters of PANDA, and Table 6 shows the best hyperparameters used by PANDA. |
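The evaluation protocol quoted in the Dataset Splits row (100 random trials, an 80%/10%/10% train/val/test split, mean test accuracy with a 95% confidence interval) can be sketched as below. This is a minimal illustration using only the Python standard library; the function names and the normal-approximation interval are assumptions for illustration, not taken from the paper's released code.

```python
import random
import statistics

def split_indices(n, seed):
    # Shuffle dataset indices and take an 80%/10%/10%
    # train/val/test split; a fresh seed per trial gives
    # the "100 random trials" described in the paper.
    idx = list(range(n))
    random.Random(seed).shuffle(idx)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return (idx[:n_train],
            idx[n_train:n_train + n_val],
            idx[n_train + n_val:])

def mean_and_ci95(accuracies):
    # Mean test accuracy and the half-width of a
    # normal-approximation 95% confidence interval
    # (1.96 * standard error over the trials).
    mean = statistics.mean(accuracies)
    half_width = 1.96 * statistics.stdev(accuracies) / len(accuracies) ** 0.5
    return mean, half_width
```

Accumulating `mean_and_ci95` over the per-trial test accuracies reproduces the "mean ± 95% CI" numbers a table like Table 4-6 would report.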