Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Understanding convolution on graphs via energies

Authors: Francesco Di Giovanni, James Rowbottom, Benjamin Paul Chamberlain, Thomas Markovich, Michael M. Bronstein

TMLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Finally, we validate our theoretical findings through ablations and real-world experiments." "In this section we validate the theoretical results through ablations and real-world experiments on node classification tasks."
Researcher Affiliation | Collaboration | Francesco Di Giovanni (University of Cambridge), James Rowbottom (University of Cambridge), Benjamin P. Chamberlain (Charm Therapeutics), Thomas Markovich (Cash App), Michael M. Bronstein (University of Oxford)
Pseudocode | No | No explicit pseudocode or algorithm blocks are present in the paper; methods are described through mathematical equations and textual explanations.
Open Source Code | Yes | "Reproducibility. Source code can be found at: https://github.com/JRowbottomGit/graff."
Open Datasets | Yes | "To validate this point, we run all three models on node classification tasks defined over graphs with varying homophily (Sen et al., 2008; Rozemberczki et al., 2021; Pei et al., 2020) (details in Appendix E)."
Dataset Splits | Yes | "Training, validation and test splits are taken from Pei et al. (2020) for all datasets for comparison."
Hardware Specification | Yes | "Experiments were run on AWS p2.8xlarge machines, each with 8 Tesla V100-SXM2 GPUs."
Software Dependencies | No | "The graph-convolutional models in (16) and (17) are implemented in PyTorch (Paszke et al., 2019), using PyTorch Geometric (Fey & Lenssen, 2019) and torchdiffeq (Chen et al., 2018). Hyperparameters were tuned using wandb (Biewald, 2020) and random grid search." The paper names the software tools used but does not provide version numbers for them.
Experiment Setup | Yes | "For all the datasets shown, we performed a simple grid search over the space m ∈ {2, 4, 8} (recall that m is the depth), learning rate ∈ {0.001, 0.005} and decay ∈ {0.0005, 0.005, 0.05}. We provide the hyperparameters that achieved the best results from the random grid search in Table 5."
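The experiment-setup quote above describes an exhaustive sweep over three small hyperparameter grids (18 combinations in total). A minimal sketch of such a sweep is shown below; `train_and_eval` is a hypothetical stand-in for the paper's actual training loop, which is not reproduced here.

```python
from itertools import product

# Grids quoted in the paper's experiment setup.
DEPTHS = [2, 4, 8]               # m: network depth
LEARNING_RATES = [0.001, 0.005]
DECAYS = [0.0005, 0.005, 0.05]

def grid_search(train_and_eval):
    """Evaluate every (m, lr, decay) combination and return the
    best-scoring configuration together with its score."""
    best_score, best_cfg = float("-inf"), None
    for m, lr, decay in product(DEPTHS, LEARNING_RATES, DECAYS):
        score = train_and_eval(m=m, lr=lr, decay=decay)
        if score > best_score:
            best_score = score
            best_cfg = {"m": m, "lr": lr, "decay": decay}
    return best_cfg, best_score
```

In practice the paper reports tuning with wandb and a random variant of this search; the exhaustive loop above is just the simplest illustration of the stated search space.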