Graph Neural Networks are Dynamic Programmers

Authors: Andrew J Dudzik, Petar Veličković

NeurIPS 2022

Reproducibility assessment (variable, result, and supporting LLM response):
Research Type: Experimental
LLM Response: "We hope our exposition will serve as a foundation for building stronger algorithmically aligned GNNs. Exposing this connection, we easily verify several prior findings in the literature, produce better-grounded GNN architectures for edge-centric tasks, and demonstrate empirical results on the CLRS algorithmic reasoning benchmark. We also provided empirical evidence of the utility of polynomial spans for analysing GNN architectures, especially in terms of algorithmic alignment."
Researcher Affiliation: Industry
LLM Response: "Andrew Dudzik, DeepMind, adudzik@deepmind.com; Petar Veličković, DeepMind, petarv@deepmind.com"
Pseudocode: No
LLM Response: "dp[x] ← recombine(score(dp[y], dp[x]) for y in expand(x))  (2)"
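The quoted template, equation (2) of the paper, describes a generic dynamic-programming update. A minimal sketch of one instantiation follows, choosing Bellman-Ford-style shortest paths: `recombine` = min, `score(dp[y], dp[x])` = `dp[y] + w(y, x)`, and `expand(x)` = in-neighbours of `x`. All helper names (`bellman_ford`, `expand`, `INF`, the edge format) are illustrative choices of ours, not from the paper.

```python
# Instantiating the DP template dp[x] <- recombine(score(dp[y], dp[x]) for y in expand(x))
# as Bellman-Ford shortest paths. Naming and edge representation are illustrative.

INF = float("inf")

def bellman_ford(num_nodes, edges, source):
    """edges: list of directed (u, v, weight) triples; returns distances from source."""
    dp = [INF] * num_nodes
    dp[source] = 0.0
    # expand(x): the nodes y (with edge weight) that have an edge y -> x
    expand = {x: [(y, w) for (y, v, w) in edges if v == x] for x in range(num_nodes)}
    for _ in range(num_nodes - 1):  # at most |V|-1 relaxation rounds
        for x in range(num_nodes):
            # recombine = min over the current value and all scored messages
            dp[x] = min([dp[x]] + [dp[y] + w for (y, w) in expand[x]])
    return dp

# Tiny example: 0 -> 1 (cost 2), 1 -> 2 (cost 3), 0 -> 2 (cost 10)
print(bellman_ford(3, [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 10.0)], 0))
# -> [0.0, 2.0, 5.0]
```

The same skeleton yields other classical DPs by swapping `recombine` and `score` (e.g. max and addition for longest paths on DAGs), which is the correspondence to message passing the paper develops.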
Open Source Code: No
LLM Response: "Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [No] We aim to release the code at a future point. The data is publicly available."
Open Datasets: Yes
LLM Response: "We initially use a set of six tasks from the recently proposed CLRS Algorithmic Reasoning Benchmark [32], which evaluates how well various (G)NNs align to classical algorithms, both in- and out-of-distribution. We reuse exactly the data generation and base model implementations in the publicly available code for the CLRS benchmark."
Dataset Splits: No
LLM Response: "We initially use a set of six tasks from the recently proposed CLRS Algorithmic Reasoning Benchmark [32], which evaluates how well various (G)NNs align to classical algorithms, both in- and out-of-distribution. We reuse exactly the data generation and base model implementations in the publicly available code for the CLRS benchmark."
Hardware Specification: No
LLM Response: "The paper does not explicitly state specific hardware details such as GPU models, CPU types, or memory amounts used for experiments in the main body."
Software Dependencies: No
LLM Response: "We reused exactly the data generation and base model implementations in the publicly available code for the CLRS benchmark."
Experiment Setup: Yes
LLM Response: "We implemented each of these options by making our GNN's message and update functions be two-layer MLPs with embedding dimension 24, and hidden layers of size 8 and 16. Lastly, we scale up our experiments to 27 different tasks in CLRS, 96-dimensional embeddings, and using the PGN processor [29]."
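To make the quoted setup concrete, here is a hedged sketch of message and update functions as two-layer MLPs with embedding dimension 24 and hidden sizes 8 and 16, as stated above. The wiring details (ReLU activations, sum aggregation, which hidden size belongs to which MLP, concatenated inputs) are our assumptions for illustration, not taken from the paper or the CLRS codebase.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(sizes):
    """Build a ReLU MLP with the given layer sizes; returns its apply function."""
    params = [(rng.normal(0.0, 0.1, (m, n)), np.zeros(n))
              for m, n in zip(sizes[:-1], sizes[1:])]
    def apply(x):
        for i, (W, b) in enumerate(params):
            x = x @ W + b
            if i < len(params) - 1:
                x = np.maximum(x, 0.0)  # ReLU on hidden layers only
        return x
    return apply

EMB = 24  # embedding dimension from the quoted setup
# Assumed wiring: message MLP uses hidden size 8, update MLP hidden size 16.
message = mlp([2 * EMB, 8, EMB])   # input: concat(h_u, h_v)
update  = mlp([2 * EMB, 16, EMB])  # input: concat(h_v, aggregated messages)

def gnn_step(h, edges):
    """One message-passing step. h: (N, EMB) node embeddings; edges: (u, v) pairs."""
    agg = np.zeros_like(h)
    for u, v in edges:  # sum-aggregate messages into each target node
        agg[v] += message(np.concatenate([h[u], h[v]]))
    return np.stack([update(np.concatenate([h[v], agg[v]]))
                     for v in range(len(h))])

h = rng.normal(size=(4, EMB))
h_next = gnn_step(h, [(0, 1), (1, 2), (2, 3)])
print(h_next.shape)  # (4, 24)
```

The larger-scale runs mentioned in the quote would differ mainly in the embedding width (96) and the processor (PGN), not in this basic message/update decomposition.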