Locality-Aware Graph Rewiring in GNNs
Authors: Federico Barbero, Ameya Velingker, Amin Saberi, Michael M. Bronstein, Francesco Di Giovanni
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We propose a novel rewiring framework that satisfies all of (i)–(iii) through a locality-aware sequence of rewiring operations. We then discuss a specific instance of such a rewiring framework and validate its effectiveness on several real-world benchmarks, showing that it either matches or significantly outperforms existing rewiring approaches. In Section 6 we validate LASER on different tasks, attaining performance that is on par with or superior to existing rewiring techniques. |
| Researcher Affiliation | Collaboration | Federico Barbero¹, Ameya Velingker², Amin Saberi³, Michael Bronstein¹, Francesco Di Giovanni¹ (¹University of Oxford, Department of Computer Science; ²Google Research; ³Stanford University, Department of Management Science and Engineering) |
| Pseudocode | Yes | Algorithm 1 (Fast µ, ν Computation) and Algorithm 2 (LASER rewiring for locality value r) are provided in Appendix D.1. A hedged sketch of such a locality-aware rewiring pass follows this table. |
| Open Source Code | Yes | Reproducibility statement. We release our code on the following URL https://github.com/Fedzbar/laser-release under the MIT license. |
| Open Datasets | Yes | We consider the Peptides-struct, Peptides-func, and PCQM-Contact tasks from the Long Range Graph Benchmark (LRGB) (Dwivedi et al., 2022). We consider the REDDIT-BINARY, IMDB-BINARY, MUTAG, ENZYMES, PROTEINS, and COLLAB tasks from TUDatasets (Morris et al., 2020). (A loading and splitting sketch follows the table.) |
| Dataset Splits | Yes | For Peptides we use a 70%/15%/15% train/val/test split, while for PCQM-Contact we use a 90%/5%/5% split. We train for 100 epochs over 25 random seeds with an 80%/10%/10% train/val/test split. |
| Hardware Specification | Yes | Hardware. Experiments were run on 2 machines with 4 NVIDIA Tesla T4 (16GB) GPUs, a 16-core Intel(R) Xeon(R) CPU (2.00GHz), and 40 GB of RAM, hosted on the Google Cloud Platform (GCP). For the PCQM-Contact experiments we increased the RAM to 80GB and the CPU cores to 30. |
| Software Dependencies | No | The paper mentions software components and algorithms like ADAM, Batch Norm, ReLU, and GCN, but does not provide specific version numbers for the software libraries or frameworks used (e.g., PyTorch, TensorFlow, Python version). |
| Experiment Setup | Yes | For the LRGB experiments, we use the same hyper-parameters and configurations provided by Dwivedi et al. (2022), respecting a 500k parameter budget in all the experiments. We lightly manually tune the number of snapshots with values L ∈ {2, 3, 4, 5} and the density with values in {1/10, 1/4, 1/2} for LASER. For FOSR, SDRF, and GTR we search the number of iterations from {5, 20, 40}. For the TUDatasets experiments, we use ADAM (Kingma & Ba, 2015) with default settings and the ReduceLROnPlateau scheduler with a patience of 20, a starting learning rate of 0.001, a decay factor of 1/2, and a minimum learning rate of 1 × 10⁻⁵. We apply Batch Norm (Ioffe & Szegedy, 2015), use ReLU as the activation function, and fix the hidden dimension to 64. We do not use dropout, avoid using a node encoder, and use a weak (linear) decoder to more accurately compare the various rewiring methods. (A configuration sketch of this setup follows the table.) |
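The appendix algorithms and the released repository are the authoritative reference. As a reading aid, below is a minimal Python sketch of a locality-aware rewiring pass in the spirit of Algorithm 2, under two stated assumptions: r-step walk counts stand in for the paper's connectivity measure µ, and one rewired snapshot is emitted per locality value. The function name and all variable names are illustrative, not the authors' implementation.

```python
import numpy as np

def laser_rewire(adj, num_snapshots, density):
    """Sketch of locality-aware rewiring: for each locality value r,
    connect the `density` fraction of node pairs at hop distance exactly r
    that are least connected (here: fewest r-step walks, an assumption)."""
    n = adj.shape[0]
    a = adj.astype(np.int64)
    snapshots = [a.copy()]      # snapshot for r = 1 is the input graph
    reach = a.astype(bool)      # pairs within hop distance < r
    walks = a.copy()            # r-step walk counts between pairs
    for r in range(2, num_snapshots + 1):
        walks = walks @ a
        # Pairs at distance exactly r: an r-step walk exists, no shorter path.
        frontier = (walks > 0) & ~reach & ~np.eye(n, dtype=bool)
        pairs = np.argwhere(np.triu(frontier))
        snap = a.copy()
        if len(pairs):
            scores = walks[pairs[:, 0], pairs[:, 1]]
            k = max(1, int(density * len(pairs)))
            chosen = pairs[np.argsort(scores)[:k]]   # least-connected pairs
            snap[chosen[:, 0], chosen[:, 1]] = 1
            snap[chosen[:, 1], chosen[:, 0]] = 1
        snapshots.append(snap)
        reach |= walks > 0
    return snapshots
```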
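Both benchmark suites ship with PyTorch Geometric; the paper does not pin library versions (see the Software Dependencies row), so treating PyG as the loader is an assumption of this sketch. LRGB provides canonical splits (70%/15%/15% for Peptides), while the quoted TUDatasets protocol uses a per-seed 80%/10%/10% random split:

```python
from torch_geometric.datasets import LRGBDataset, TUDataset

# LRGB tasks come with fixed train/val/test splits.
pep_train = LRGBDataset(root="data", name="Peptides-func", split="train")
pep_val = LRGBDataset(root="data", name="Peptides-func", split="val")
pep_test = LRGBDataset(root="data", name="Peptides-func", split="test")

# TUDatasets have no canonical split; shuffle and cut 80%/10%/10% per seed.
mutag = TUDataset(root="data", name="MUTAG").shuffle()
n = len(mutag)
train = mutag[: int(0.8 * n)]
val = mutag[int(0.8 * n): int(0.9 * n)]
test = mutag[int(0.9 * n):]
```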
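The quoted TUDatasets optimization settings map directly onto standard PyTorch calls. In the sketch below, the two-layer model with dummy inputs is a placeholder assumption standing in for the GCN backbone; the optimizer, scheduler, normalization, activation, and hidden width follow the Experiment Setup row.

```python
import torch
from torch.optim.lr_scheduler import ReduceLROnPlateau

x, y = torch.randn(128, 7), torch.randint(0, 2, (128,))  # dummy data placeholder

hidden = 64
model = torch.nn.Sequential(        # placeholder for the GCN stack; no node
    torch.nn.Linear(7, hidden),     # encoder, per the quoted setup
    torch.nn.BatchNorm1d(hidden),   # Batch Norm
    torch.nn.ReLU(),                # ReLU activation
    torch.nn.Linear(hidden, 2),     # weak (linear) decoder
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # default Adam settings
scheduler = ReduceLROnPlateau(optimizer, factor=0.5, patience=20, min_lr=1e-5)

for epoch in range(100):            # 100 epochs, per the quoted protocol
    optimizer.zero_grad()
    loss = torch.nn.functional.cross_entropy(model(x), y)
    loss.backward()
    optimizer.step()
    # The scheduler steps on a plateauing metric; here the training loss
    # stands in for the validation loss that would be used in practice.
    scheduler.step(loss.detach())
```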