Affinity-Aware Graph Networks

Authors: Ameya Velingker, Ali Sinop, Ira Ktena, Petar Veličković, Sreenivas Gollapudi

NeurIPS 2023

Each entry below gives a reproducibility variable, the assessed result, and the supporting LLM response.
Research Type: Experimental. "We propose message passing networks based on these features and evaluate their performance on a variety of node and graph property prediction tasks. Our architecture has low computational complexity, while our features are invariant to the permutations of the underlying graph. The measures we compute allow the network to exploit the connectivity properties of the graph, thereby allowing us to outperform relevant benchmarks for a wide variety of tasks, often with significantly fewer message passing steps. We evaluate our networks on a number of benchmark datasets of diverse scales (see Section 5)."
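To make the described approach concrete, here is a minimal sketch, not the authors' exact model, of a single message passing step in which each edge's message is conditioned on a precomputed affinity feature such as its effective resistance. The function name, feature shapes, and the tanh stand-in for a learned MLP are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

def affinity_mpnn_layer(node_feats, edge_feats, eff_res, senders, receivers):
    """One message passing step with affinity-augmented edge features.

    eff_res holds one precomputed affinity value per edge (e.g. the
    effective resistance between its endpoints); senders/receivers give
    the directed edge list. All names here are illustrative.
    """
    # Build messages from sender features, raw edge features, and the
    # affinity feature; tanh stands in for a learned MLP.
    msgs = jnp.tanh(jnp.concatenate(
        [node_feats[senders], edge_feats, eff_res[:, None]], axis=-1))
    # Aggregate incoming messages at each receiver node.
    agg = jax.ops.segment_sum(msgs, receivers,
                              num_segments=node_feats.shape[0])
    # Update node states with the aggregated messages.
    return jnp.concatenate([node_feats, agg], axis=-1)
```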
Researcher Affiliation: Industry. Ameya Velingker (Google Research, ameyav@google.com), Ali Kemal Sinop (Google Research, asinop@google.com), Ira Ktena (Google DeepMind, iraktena@google.com), Petar Veličković (Google DeepMind, petarv@google.com), Sreenivas Gollapudi (Google Research, sgollapu@google.com).
Pseudocode: No. The paper does not contain structured pseudocode or algorithm blocks labeled as 'Algorithm' or 'Pseudocode'.
Open Source Code: No. The paper neither includes an unambiguous statement that the authors are releasing the code for the work described in this paper, nor provides a direct link to a source-code repository containing their implementation.
Open Datasets: Yes. "The first dataset we explore is the PNA dataset [9], which captures a multimodal setting. The ogbg-molhiv dataset is a molecular property prediction dataset comprised of molecular graphs without spatial information (such as atom coordinates). The ogbg-molpcba dataset comprises molecular graphs without spatial information (such as atom coordinates). Next, we present results on ogbn-arxiv and ogbn-mag, transductive datasets with large graphs. We finally include experimental results for one of the largest-scale publicly available graph regression tasks: the PCQM4Mv1 dataset from the OGB Large Scale Challenge [20]."
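The OGB datasets named above ship with standard loaders and fixed splits. The following minimal sketch uses the public ogb package rather than anything from the paper; it fetches one of the graph property prediction datasets and its standard split (the node-level and large-scale-challenge datasets use the corresponding ogb loader classes).

```python
from ogb.graphproppred import GraphPropPredDataset

# Downloads ogbg-molhiv on first use.
dataset = GraphPropPredDataset(name="ogbg-molhiv")
split_idx = dataset.get_idx_split()   # standard 'train'/'valid'/'test' indices
graph, label = dataset[split_idx["train"][0]]
print(graph["num_nodes"], graph["edge_index"].shape)
```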
Dataset Splits: Yes. "We report the single-model validation performance on this dataset, in line with previous works [17, 48, 1]" (Section 5.5). Table 5 explicitly lists "Validation MAE", and the paper also reports "Test % AUC-ROC" and "Test Mean Average Precision" for other datasets, indicating that train/validation/test splits are used.
Hardware Specification: Yes. "Using the combinatorial multigrid preconditioner [27, 25], we constructed the effective resistances on this graph in an hour on a standard MacBook Pro 2019 laptop."
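The effective resistances mentioned here are the affinity measure the architecture builds on. A minimal sketch follows, computing them exactly via the Laplacian pseudoinverse, R(u, v) = (e_u - e_v)^T L^+ (e_u - e_v); this dense approach is only viable for small graphs, whereas the paper reports using a combinatorial multigrid preconditioner [27, 25] to scale.

```python
import numpy as np

def effective_resistances(edges, n):
    """Exact effective resistances via the Laplacian pseudoinverse.

    Suitable only for small graphs: np.linalg.pinv is O(n^3). The paper
    instead uses a combinatorial multigrid preconditioner at scale.
    """
    L = np.zeros((n, n))
    for u, v in edges:          # build the unweighted graph Laplacian
        L[u, u] += 1.0
        L[v, v] += 1.0
        L[u, v] -= 1.0
        L[v, u] -= 1.0
    Lp = np.linalg.pinv(L)
    # R(u, v) = L+[u, u] + L+[v, v] - 2 * L+[u, v]
    return {(u, v): Lp[u, u] + Lp[v, v] - 2.0 * Lp[u, v] for u, v in edges}

# On the path graph 0-1-2, every edge has effective resistance 1.
print(effective_resistances([(0, 1), (1, 2)], n=3))
```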
Software Dependencies: No. The paper states that "All of our models have been implemented using the jraph library [16]", but it does not specify version numbers for jraph or any other software dependencies, which is required for reproducibility.
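For context on the unversioned dependency, here is a minimal sketch of constructing a graph with jraph's core data structure; the shapes are arbitrary placeholders, not values from the paper.

```python
import jax.numpy as jnp
import jraph

# A toy 3-node, 2-edge graph in jraph's GraphsTuple container, the
# structure models built with jraph consume; shapes are placeholders.
graph = jraph.GraphsTuple(
    nodes=jnp.ones((3, 4)),       # 3 nodes with 4 features each
    edges=jnp.ones((2, 2)),       # 2 directed edges with 2 features each
    senders=jnp.array([0, 1]),
    receivers=jnp.array([1, 2]),
    globals=jnp.ones((1, 1)),     # one graph-level feature vector
    n_node=jnp.array([3]),
    n_edge=jnp.array([2]),
)
```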
Experiment Setup: Yes. "In this section we provide the hyperparameters used for the different models on the PNA multitask benchmark. We train all models for 2000 steps and with 3 layers. The remaining hyperparameters for hidden size of each layer, learning rate, number of message passing steps (only valid for MPNN models), number of rotation matrices and same example frequency (when relevant) are provided in Table 6." (Appendix A) Table 6 itself provides specific values for these hyperparameters.
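Only the step count and layer count appear in the quoted text; the hypothetical config dict below records them, leaving the remaining Table 6 hyperparameters as unset placeholders since their per-model values are not reproduced here.

```python
# Hypothetical training configuration. Only num_steps and num_layers
# come from the quoted text; the other keys mirror the hyperparameters
# the paper says Table 6 lists, with values deliberately left unset.
config = {
    "num_steps": 2000,                   # "We train all models for 2000 steps"
    "num_layers": 3,                     # "...and with 3 layers"
    "hidden_size": None,                 # per model, see Table 6
    "learning_rate": None,               # per model, see Table 6
    "num_message_passing_steps": None,   # MPNN models only, see Table 6
    "num_rotation_matrices": None,       # when relevant, see Table 6
    "same_example_frequency": None,      # when relevant, see Table 6
}
```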