Graph Neural Networks with Learnable Structural and Positional Representations

Authors: Vijay Prakash Dwivedi, Anh Tuan Luu, Thomas Laurent, Yoshua Bengio, Xavier Bresson

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate the proposed MPGNNs-LSPE architecture on the instances of sparse GNNs and Transformer GNNs defined in Section 3.2 (all models are presented in Section C), using PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019) on the standard molecular benchmarks ZINC (Irwin et al., 2012), OGBG-MOLTOX21 and OGBG-MOLPCBA (Hu et al., 2020). The results of all our experiments on different instances of LSPE, along with performance without using PE, are presented in Table 1, whereas the comparison of the best results from Table 1 with baseline models and SOTA is shown in Table 2.
Researcher Affiliation | Academia | Vijay Prakash Dwivedi (1) vijaypra001@e.ntu.edu.sg, Anh Tuan Luu (1) anhtuan.luu@ntu.edu.sg, Thomas Laurent (2) tlaurent@lmu.edu, Yoshua Bengio (3,4) yoshua.bengio@mila.quebec, Xavier Bresson (5) xavier@nus.edu.sg. (1) Nanyang Technological University, Singapore; (2) Loyola Marymount University; (3) Mila, University of Montréal; (4) CIFAR; (5) National University of Singapore
Pseudocode | Yes | Algorithm 1: Algorithm to decide whether a pair of graphs are not isomorphic based on random walk landing probabilities of each node to itself. (A sketch of this test appears after the table.)
Open Source Code | Yes | Code: https://github.com/vijaydwivedi75/gnn-lspe
Open Datasets | Yes | We evaluate the proposed MPGNNs-LSPE architecture on the instances of sparse GNNs and Transformer GNNs defined in Section 3.2 (all models are presented in Section C), using PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019) on the standard molecular benchmarks ZINC (Irwin et al., 2012), OGBG-MOLTOX21 and OGBG-MOLPCBA (Hu et al., 2020).
Dataset Splits | Yes | For ZINC, we use the 12,000-graph subset of the dataset with the same splitting defined in Dwivedi et al. (2020). For OGBG-MOLTOX21, we use the scaffold-split version of the dataset included in OGB (Hu et al., 2020), which consists of 7,831 graphs. OGBG-MOLPCBA has 437,929 graphs with scaffold split, and its evaluation metric is Average Precision (AP) averaged over the tasks. (See the loader sketch after the table.)
Hardware Specification | Yes | All models were trained on an Intel Xeon CPU E5-2690 v4 server with 4 Nvidia 1080Ti GPUs, with each GPU running one experiment, i.e., 4 experiments in parallel on the machine at a time.
Software Dependencies | No | The paper mentions 'using PyTorch (Paszke et al., 2019) and DGL (Wang et al., 2019)' but does not provide specific version numbers for these software packages. (A version-recording snippet follows the table.)
Experiment Setup | Yes | In Table 5, additional details on the hyperparameters of the different models used in Table 1 are provided. ... Init lr and Min lr are the initial and final learning rates for the learning-rate decay strategy, in which the lr decays by a reduce factor if the validation score doesn't improve after Patience epochs. α and λ apply when PosLoss is used (Eqn. 12). (A sketch of this schedule follows the table.)
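
The non-isomorphism test behind Algorithm 1 can be sketched in a few lines. The following is a minimal NumPy illustration of our reading of the caption, not the authors' pseudocode: compute, for each node, the probability of returning to itself after 1..k random-walk steps (the paper defines the walk operator as RW = AD^-1), then compare the two graphs' node-wise probability vectors as order-independent multisets. The function names and the default k are ours; differing multisets prove non-isomorphism, while equal multisets are inconclusive.

```python
import numpy as np

def rw_landing_probs(A: np.ndarray, k: int) -> np.ndarray:
    """Return probabilities of t-step random walks, t = 1..k, per node.

    A is a dense adjacency matrix with no isolated nodes. With
    RW = A @ D^{-1}, the entry (RW^t)[i, i] is the probability that a
    t-step walk starting at node i lands back on node i.
    """
    d = A.sum(axis=1)
    RW = A / d[None, :]              # right-multiply by D^{-1}
    M = np.eye(A.shape[0])
    probs = []
    for _ in range(k):
        M = M @ RW
        probs.append(np.diag(M))
    return np.stack(probs, axis=1)   # shape: (n_nodes, k)

def maybe_non_isomorphic(A1: np.ndarray, A2: np.ndarray, k: int = 16) -> bool:
    """True if the two graphs are provably non-isomorphic under this test."""
    if A1.shape != A2.shape:
        return True
    p1, p2 = rw_landing_probs(A1, k), rw_landing_probs(A2, k)
    # Sort node vectors lexicographically so the comparison is invariant
    # to node ordering: isomorphic graphs yield identical multisets.
    p1 = p1[np.lexsort(p1.T[::-1])]
    p2 = p2[np.lexsort(p2.T[::-1])]
    return not np.allclose(p1, p2)
```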
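
On the dataset splits: the scaffold splits for the OGB datasets are bundled with OGB itself, so the train/valid/test partition is deterministic rather than randomly drawn. Below is a minimal sketch using OGB's DGL loader; the paper uses DGL but does not show its loading code, so this is only one natural way to fetch the same split.

```python
from ogb.graphproppred import DglGraphPropPredDataset, Evaluator

# The scaffold split ships with the dataset; no random splitting needed.
dataset = DglGraphPropPredDataset(name="ogbg-moltox21")
split_idx = dataset.get_idx_split()
print({name: len(idx) for name, idx in split_idx.items()})

# OGB also fixes the evaluation metric per dataset
# (ROC-AUC for ogbg-moltox21, AP for ogbg-molpcba).
evaluator = Evaluator(name="ogbg-moltox21")
print(evaluator.expected_input_format)
```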
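
Because the software versions are unreported, anyone rerunning the code must pin them independently. A trivial way to record the environment actually used, assuming only that torch and dgl are installed:

```python
import torch
import dgl

# Log exact versions so a rerun can reconstruct the environment.
print("torch:", torch.__version__)
print("dgl:", dgl.__version__)
print("cuda:", torch.version.cuda)
```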
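
The learning-rate schedule described in the Experiment Setup row matches PyTorch's reduce-on-plateau pattern. Here is a minimal sketch with placeholder values; the paper's actual Init lr, Min lr, reduce factor, and Patience per model are in its Table 5, and `model`, `train_one_epoch`, and `validate` are hypothetical stand-ins.

```python
import torch

init_lr, min_lr = 1e-3, 1e-5   # placeholders, not the paper's values
max_epochs = 1000              # placeholder

optimizer = torch.optim.Adam(model.parameters(), lr=init_lr)
scheduler = torch.optim.lr_scheduler.ReduceLROnPlateau(
    optimizer,
    mode="min",      # monitor a validation loss
    factor=0.5,      # the "reduce factor" (placeholder)
    patience=10,     # the "Patience" in epochs (placeholder)
)

for epoch in range(max_epochs):
    train_one_epoch(model, optimizer)   # hypothetical training step
    val_loss = validate(model)          # hypothetical validation routine
    scheduler.step(val_loss)            # decay lr when the score stalls
    if optimizer.param_groups[0]["lr"] < min_lr:
        break                           # stop at the final learning rate
```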