Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Rewiring with Positional Encodings for Graph Neural Networks

Authors: Rickard Brüel Gabrielsson, Mikhail Yurochkin, Justin Solomon

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate performance on six benchmark graph datasets: ZINC, AQSOL, PATTERN, CLUSTER, MNIST, and CIFAR10 from (Dwivedi et al., 2020)."
Researcher Affiliation | Collaboration | Rickard Brüel-Gabrielsson (Massachusetts Institute of Technology; MIT-IBM Watson AI Lab), Mikhail Yurochkin (MIT-IBM Watson AI Lab; IBM Research), Justin Solomon (Massachusetts Institute of Technology; MIT-IBM Watson AI Lab)
Pseudocode | No | The paper includes mathematical formulas in Section B, "Transformer Implementation," but does not present structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | "Both code and training follow Dwivedi et al. (2020) closely, and to a lesser extent (Dwivedi and Bresson, 2021), which uses the same code base."
Open Datasets | Yes | "We evaluate performance on six benchmark graph datasets: ZINC, AQSOL, PATTERN, CLUSTER, MNIST, and CIFAR10 from (Dwivedi et al., 2020)."
Dataset Splits | Yes | "Both code and training follow Dwivedi et al. (2020) closely, and to a lesser extent (Dwivedi and Bresson, 2021), which uses the same code base. ... Training also stops if for a certain number of epochs the validation loss does not improve (Dwivedi et al., 2020)."
Hardware Specification | Yes | "we use similar compute to their work via a single Tesla V100 GPU."
Software Dependencies | No | The paper mentions using the "Adam optimizer (Kingma and Ba, 2015)" but does not provide specific version numbers for the software libraries or frameworks used in the implementation.
Experiment Setup | Yes | "Like (Dwivedi et al., 2020), we use the Adam optimizer (Kingma and Ba, 2015) with the same learning rate decay strategy. The initial learning rate is set to 10^-3 and is reduced by half if the validation loss does not improve after a fixed ('lr_schedule_patience') number of epochs, either 5 or 10. Instead of setting a maximum number of epochs, the training is stopped either when the learning rate has reached 10^-6 or when the computational time reaches 12 hours (6 hours for Neighbors Match). Experiments are run with 4 different seeds; we report summary statistics from the 4 results."
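The training-control rule quoted in the Experiment Setup row (halve the learning rate when validation loss plateaus for a fixed patience, stop once it drops below 10^-6) can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code; the function name `run_schedule` and the toy loss sequences are assumptions, and the wall-clock stopping condition is omitted.

```python
def run_schedule(val_losses, lr=1e-3, patience=5, min_lr=1e-6):
    """Simulate the plateau-based LR decay described in the paper's setup.

    Halves `lr` whenever the validation loss fails to improve for
    `patience` consecutive epochs, and stops once `lr` falls below
    `min_lr` (the paper's 10^-6 criterion). Returns the final lr.
    """
    best = float("inf")
    bad_epochs = 0
    for loss in val_losses:
        if loss < best:
            best = loss        # validation loss improved
            bad_epochs = 0
        else:
            bad_epochs += 1    # plateau counter
        if bad_epochs >= patience:
            lr /= 2            # halve the learning rate on plateau
            bad_epochs = 0
        if lr < min_lr:
            break              # stopping criterion: lr below 10^-6
    return lr
```

With a constant (never-improving) loss sequence, the rate halves every `patience` epochs until the 10^-6 threshold ends training; with a monotonically improving sequence, it never decays.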