On the Connection Between MPNN and Graph Transformer

Authors: Chen Cai, Truong Son Hy, Rose Yu, Yusu Wang

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On the empirical side, we demonstrate that 1) MPNN + VN works surprisingly well on the recently proposed Long Range Graph Benchmark (LRGB) datasets (Dwivedi et al., 2022), which arguably require long-range interaction reasoning to achieve strong performance; 2) our implementation of MPNN + VN is able to further improve the early implementation of MPNN + VN on OGB datasets; and 3) MPNN + VN outperforms Linear Transformer (Katharopoulos et al., 2020b) and MPNN on the climate modeling task.
Researcher Affiliation | Academia | Chen Cai¹, Truong Son Hy¹, Rose Yu¹, Yusu Wang¹. ¹University of California San Diego, San Diego, USA. Correspondence to: Chen Cai <c1cai@ucsd.edu>.
Pseudocode | No | The paper describes the MPNN layer, the heterogeneous MPNN + VN layer, and the simplified heterogeneous MPNN + VN layer through mathematical definitions (Definitions 3.2, 3.3, and 3.4) and prose, not in a structured pseudocode format. (A hedged sketch of such a layer appears after the table.)
Open Source Code | Yes | The code is available at https://github.com/Chen-Cai-OSU/MPNN-GT-Connection.
Open Datasets | Yes | We experiment with MPNN + VN for Long Range Graph Benchmark (LRGB) datasets. Original paper (Dwivedi et al., 2022)... We tested this specific version of MPNN + VN on 4 OGB datasets... ogbg-molhiv and ogbg-molpcba (Hu et al., 2020)... ogbg-ppa (Wu et al., 2021)... ogbg-code2 (Wu et al., 2021)... OGB-LSC PCQM4Mv2 (Hu et al., 2021)... from the DOISST data proposed by (Huang et al., 2021)... (A loading sketch for the OGB datasets follows the table.)
Dataset Splits | Yes | We train on data from years 1982-2018, validate on data from 2019, and test on data from 2020-2021. Table 10: Number of training, validation, and testing examples for each setting in the task of SST prediction. (A sketch of this temporal split follows the table.)
Hardware Specification | No | The paper does not specify any particular GPU models, CPU models, or memory details used for running its experiments.
Software Dependencies | No | No software libraries or version numbers are listed; the closest detail is the training setup: "We train all our models with 100 epochs with batch size 20, initial learning rate 10^-3, and Adam optimizer (Kingma & Ba, 2014)."
Experiment Setup | Yes | We train all our models with 100 epochs with batch size 20, initial learning rate 10^-3, and Adam optimizer (Kingma & Ba, 2014). Both MPNN and MPNN + VN have 3 layers of message passing with 256 hidden dimensions. We apply an MLP with one hidden layer of 512 neurons on top of the network to make the final prediction. (A training-skeleton sketch with these stated hyperparameters follows the table.)
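
Since the paper specifies its layers only through mathematical definitions (Definitions 3.2, 3.3, and 3.4) rather than pseudocode, here is a minimal PyTorch sketch of one MPNN + VN (virtual node) layer in the spirit of those definitions. The module name `MPNNVNLayer`, the mean aggregation, and the ReLU nonlinearities are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MPNNVNLayer(nn.Module):
    """One message-passing layer with a virtual node (hypothetical sketch).

    The virtual node is connected to every graph node: it aggregates all
    node states, updates itself, and broadcasts its state back, which is
    how MPNN + VN mixes information globally in a single layer.
    """

    def __init__(self, dim: int = 256):
        super().__init__()
        self.msg = nn.Linear(2 * dim, dim)      # edge message from (h_src, h_dst)
        self.upd = nn.Linear(2 * dim, dim)      # node update from (h, aggregate)
        self.vn_upd = nn.Linear(2 * dim, dim)   # virtual-node update

    def forward(self, h, edge_index, vn):
        # h: [N, dim] node states; edge_index: [2, E]; vn: [dim] virtual node
        src, dst = edge_index
        # 1) messages along edges, mean-aggregated at the destination node
        m = torch.relu(self.msg(torch.cat([h[src], h[dst]], dim=-1)))
        agg = torch.zeros_like(h).index_add_(0, dst, m)
        deg = torch.zeros(h.size(0), device=h.device).index_add_(
            0, dst, torch.ones(src.size(0), device=h.device)).clamp(min=1)
        agg = agg / deg.unsqueeze(-1)
        # 2) virtual node reads a global summary of all nodes and updates itself
        vn = torch.relu(self.vn_upd(torch.cat([vn, h.mean(dim=0)], dim=-1)))
        # 3) each node combines its local aggregate with the broadcast VN state
        h = torch.relu(self.upd(torch.cat([h, agg + vn], dim=-1)))
        return h, vn
```

The virtual node acts as a global scratchpad: every node writes to it and reads from it each layer, so even one layer couples all node pairs; this global mixing is the mechanism the paper relates to (linear) transformer attention.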
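The OGB datasets named above ship with official splits. Below is a minimal sketch of loading one of them, assuming the `ogb` package with a PyTorch Geometric backend; this mirrors standard OGB usage, not necessarily the authors' pipeline, and the batch size here is a placeholder.

```python
# Minimal sketch: loading an OGB graph-property dataset with its official split.
from ogb.graphproppred import PygGraphPropPredDataset
from torch_geometric.loader import DataLoader

dataset = PygGraphPropPredDataset(name="ogbg-molhiv")
split_idx = dataset.get_idx_split()  # dict with "train" / "valid" / "test" indices

train_loader = DataLoader(dataset[split_idx["train"]], batch_size=32, shuffle=True)
valid_loader = DataLoader(dataset[split_idx["valid"]], batch_size=32)
test_loader = DataLoader(dataset[split_idx["test"]], batch_size=32)
```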
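For the SST climate task the split is purely temporal. A tiny sketch of that year-based split, assuming each example is tagged with its year (the array name and helper function are hypothetical):

```python
import numpy as np

def temporal_split(years: np.ndarray):
    """Split example indices by year as described in the paper:
    train on 1982-2018, validate on 2019, test on 2020-2021."""
    train = np.where((years >= 1982) & (years <= 2018))[0]
    valid = np.where(years == 2019)[0]
    test = np.where((years >= 2020) & (years <= 2021))[0]
    return train, valid, test
```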
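The stated experiment setup maps directly onto a training skeleton. In the sketch below, only the numbers come from the paper (100 epochs, batch size 20, learning rate 10^-3, Adam, 3 message-passing layers with 256 hidden dimensions, a 512-unit MLP head); the `SSTModel` wrapper, the loss, and the data loader are placeholder assumptions, and `MPNNVNLayer` refers to the sketch above.

```python
import torch
import torch.nn as nn

# Hyperparameters stated in the paper's experiment setup.
EPOCHS, BATCH_SIZE, LR = 100, 20, 1e-3
NUM_LAYERS, HIDDEN_DIM, MLP_HIDDEN = 3, 256, 512

class SSTModel(nn.Module):  # hypothetical wrapper, not the authors' class
    def __init__(self):
        super().__init__()
        self.layers = nn.ModuleList(
            MPNNVNLayer(HIDDEN_DIM) for _ in range(NUM_LAYERS))  # sketch above
        # "MLP with one hidden layer of 512 neurons" for the final prediction
        self.head = nn.Sequential(
            nn.Linear(HIDDEN_DIM, MLP_HIDDEN), nn.ReLU(),
            nn.Linear(MLP_HIDDEN, 1))

    def forward(self, h, edge_index):
        # h is assumed to already be [N, HIDDEN_DIM]; a real pipeline would
        # first encode raw node features into this width.
        vn = torch.zeros(h.size(-1), device=h.device)
        for layer in self.layers:
            h, vn = layer(h, edge_index, vn)
        return self.head(h)

model = SSTModel()
optimizer = torch.optim.Adam(model.parameters(), lr=LR)  # Adam (Kingma & Ba, 2014)
loss_fn = nn.MSELoss()  # placeholder loss for the regression task

# Placeholder loop; `loader` would yield batches of size 20 per the paper.
# for epoch in range(EPOCHS):
#     for h, edge_index, y in loader:
#         optimizer.zero_grad()
#         loss_fn(model(h, edge_index), y).backward()
#         optimizer.step()
```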