Probabilistically Rewired Message-Passing Neural Networks

Authors: Chendi Qian, Andrei Manolache, Kareem Ahmed, Zhe Zeng, Guy Van den Broeck, Mathias Niepert, Christopher Morris

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirically, we demonstrate that our approach effectively mitigates issues like over-squashing and under-reaching. In addition, on established real-world datasets, our method exhibits competitive or superior predictive performance compared to traditional MPNN models and recent graph transformer architectures."
Researcher Affiliation | Collaboration | Chendi Qian* (Computer Science Department, RWTH Aachen University, Germany; chendi.qian@log.rwth-aachen.de); Andrei Manolache* (Computer Science Department, University of Stuttgart, Germany; Bitdefender, Romania; andrei.manolache@ki.uni-stuttgart.de); Kareem Ahmed, Zhe Zeng & Guy Van den Broeck (Computer Science Department, University of California, Los Angeles, USA); Mathias Niepert (Computer Science Department, University of Stuttgart, Germany); Christopher Morris (Computer Science Department, RWTH Aachen University, Germany)
Pseudocode | No | The paper does not contain any sections or figures explicitly labeled as 'Pseudocode' or 'Algorithm'.
Open Source Code | Yes | "An anonymized repository of our code can be accessed at https://anonymous.4open.science/r/PR-MPNN." The de-anonymized code is available at https://github.com/chendiqian/PR-MPNN/.
Open Datasets | Yes | "To answer Q1, we utilized the TREES-NEIGHBORSMATCH dataset (Alon & Yahav, 2021). Additionally, we created the TREES-LEAFCOUNT dataset... To tackle Q2, we performed experiments with the EXP (Abboud et al., 2020) and CSL datasets (Murphy et al., 2019)... To answer Q3 (a), we used the established molecular graph-level regression datasets ALCHEMY (Chen et al., 2019), ZINC (Jin et al., 2017; Dwivedi et al., 2020), OGBG-MOLHIV (Hu et al., 2020a), QM9 (Hamilton et al., 2017), LRGB (Dwivedi et al., 2022b) and five datasets from the TUDATASET repository (Morris et al., 2020). To answer Q3 (b), we used the CORNELL, WISCONSIN, TEXAS node-level classification datasets (Pei et al., 2020)."
Dataset Splits | Yes | "For the TUDATASET, we compare with the reported scores from Giusti et al. (2023b) and use the same evaluation strategy as in Xu et al. (2019); Giusti et al. (2023b), i.e., running 10-fold cross-validation and reporting the maximum average validation accuracy. We evaluate test predictive performance based on validation performance."
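The "maximum average validation accuracy" protocol from Xu et al. (2019) can be sketched as follows. The function name and the toy validation curves are illustrative, not taken from the paper: per-epoch validation accuracies are averaged across the folds, and the maximum of that averaged curve is reported.

```python
import numpy as np

def max_avg_validation_accuracy(val_curves):
    """Average per-epoch validation accuracy across folds,
    then report the maximum of the averaged curve."""
    curves = np.asarray(val_curves)      # shape: (n_folds, n_epochs)
    avg_curve = curves.mean(axis=0)      # average across folds, per epoch
    best_ep = int(avg_curve.argmax())    # epoch with the best averaged score
    return float(avg_curve[best_ep]), best_ep

# Hypothetical validation curves for 3 folds over 4 epochs.
curves = [[0.60, 0.70, 0.72, 0.71],
          [0.58, 0.69, 0.74, 0.70],
          [0.61, 0.68, 0.73, 0.72]]
acc, best_ep = max_avg_validation_accuracy(curves)
```

The paper's protocol uses 10 folds; three folds are shown here only to keep the toy example short.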
Hardware Specification | Yes | "Experiments performed on a machine with a single Nvidia RTX A5000 GPU and an Intel i9-11900K CPU."
Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | "Table 8 lists our hyperparameter choices. For all our experiments, we use early stopping with an initial learning rate of 0.001 that we decay by half on a plateau."
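A minimal sketch of that training schedule, assuming a PyTorch loop: the stand-in model, the patience values, and the synthetic validation losses are hypothetical, but `ReduceLROnPlateau` with `factor=0.5` matches "decay by half on a plateau" starting from a learning rate of 0.001.

```python
import torch

# Hypothetical stand-in model; the paper's actual MPNN is not shown here.
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)  # initial lr 0.001
# Halve the learning rate when the validation loss plateaus.
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer, mode="min", factor=0.5, patience=2)

# Synthetic validation losses standing in for real evaluation results.
val_losses = [1.0, 0.8, 0.7, 0.65, 0.65, 0.65, 0.65, 0.65, 0.65, 0.65]

best_val, bad_epochs, stop_patience = float("inf"), 0, 5  # assumed patience
for epoch, val_loss in enumerate(val_losses):
    scheduler.step(val_loss)  # halves lr after `patience` non-improving epochs
    if val_loss < best_val:
        best_val, bad_epochs = val_loss, 0
    else:
        bad_epochs += 1
        if bad_epochs >= stop_patience:
            break  # early stopping
```

With the synthetic losses above, the learning rate is halved once after the plateau sets in, and training stops early before exhausting the epoch budget.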