Neural Sheaf Diffusion: A Topological Perspective on Heterophily and Oversmoothing in GNNs

Authors: Cristian Bodnar, Francesco Di Giovanni, Benjamin Chamberlain, Pietro Liò, Michael Bronstein

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Section 6 (Experiments): Synthetic experiments. We consider a simple setup given by a connected bipartite graph with equally sized partitions. ... Real-world experiments. We test our models on multiple real-world datasets [47, 53, 57, 61, 66] with an edge homophily coefficient h ranging from h = 0.11 (very heterophilic) to h = 0.81 (very homophilic). ... Results. From Table 1 we see that our models are first in 5/6 benchmarks with high heterophily (h < 0.3) and second-ranked on the remaining one (i.e. Chameleon). (See the edge-homophily sketch below.)
Researcher Affiliation | Collaboration | Cristian Bodnar (University of Cambridge, cristian.bodnar@cl.cam.ac.uk); Francesco Di Giovanni (Twitter, fdigiovanni@twitter.com); Benjamin P. Chamberlain (Twitter); Pietro Liò (University of Cambridge); Michael Bronstein (University of Oxford & Twitter)
Pseudocode | No | The paper describes methods through equations and textual explanations, but it does not include any explicitly labeled pseudocode or algorithm blocks. (See the diffusion-step sketch below.)
Open Source Code | Yes | Our code is available at https://github.com/twitter-research/neural-sheaf-diffusion.
Open Datasets | Yes | We test our models on multiple real-world datasets [47, 53, 57, 61, 66] with an edge homophily coefficient h ranging from h = 0.11 (very heterophilic) to h = 0.81 (very homophilic).
Dataset Splits | Yes | Each split contains 48%/32%/20% of nodes per class for training, validation and testing, respectively. (See the per-class split sketch below.)
Hardware Specification | Yes | All experiments are conducted on 8 NVIDIA A6000 GPUs.
Software Dependencies | No | The paper mentions the PyTorch framework and the Adam optimizer but does not give their version numbers. It pins only a Householder-transformation utility (torch-householder, version 1.0.1), not the main dependencies such as PyTorch itself. (See the Householder sketch below.)
Experiment Setup | Yes | The models are trained for 200 epochs using the Adam optimizer [38] with a learning rate of 0.01 and a weight decay of 5e-4 for all datasets. We perform early stopping with a patience of 50 epochs on the validation accuracy. (See the training-loop sketch below.)
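
The edge homophily coefficient h quoted in the rows on research type and open datasets is, in the standard definition, the fraction of edges whose two endpoints share a class label. A minimal sketch of that computation, assuming an undirected edge list and integer labels (variable names are illustrative, not taken from the paper's repository):

```python
import torch

def edge_homophily(edge_index: torch.Tensor, labels: torch.Tensor) -> float:
    """Fraction of edges whose two endpoints carry the same class label.

    edge_index: LongTensor of shape (2, num_edges)
    labels:     LongTensor of shape (num_nodes,)
    """
    src, dst = edge_index
    same = (labels[src] == labels[dst]).float()
    return same.mean().item()

# Example: a 4-node path where 1 of 3 edges joins same-class nodes (h ~ 0.33).
edge_index = torch.tensor([[0, 1, 2],
                           [1, 2, 3]])
labels = torch.tensor([0, 0, 1, 0])
print(edge_homophily(edge_index, labels))
```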
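Because the paper presents its model through equations rather than pseudocode, the following is a rough sketch of one discrete sheaf-diffusion update of the form X <- X - sigma(L_F (I_n ⊗ W1) X W2), assuming a precomputed (nd x nd) normalized sheaf Laplacian L_F. The class name, the dense-Laplacian simplification, and the choice of nonlinearity are assumptions, not the repository's implementation.

```python
import torch
import torch.nn as nn

class SheafDiffusionStep(nn.Module):
    """One discrete step X <- X - sigma(L_F (I_n kron W1) X W2).

    X has shape (n * d, f): n nodes, stalk dimension d, f feature channels.
    L_F is an (n*d, n*d) sheaf Laplacian supplied by the caller.
    """
    def __init__(self, d: int, f: int):
        super().__init__()
        self.d = d
        self.W1 = nn.Parameter(torch.eye(d))   # acts on the stalk dimension
        self.W2 = nn.Parameter(torch.eye(f))   # mixes feature channels
        self.act = nn.ELU()

    def forward(self, X: torch.Tensor, L_F: torch.Tensor) -> torch.Tensor:
        nd, f = X.shape
        n = nd // self.d
        # (I_n kron W1) X: apply W1 to each node's (d, f) block.
        Y = (self.W1 @ X.view(n, self.d, f)).reshape(nd, f)
        # Diffuse along the sheaf Laplacian, mix channels, apply nonlinearity.
        return X - self.act(L_F @ Y @ self.W2)
```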
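The 48%/32%/20% per-class split quoted in the dataset-splits row corresponds to a stratified partition of the nodes. The helper below is an illustrative sketch of such a split, not the splitting code used for the reported benchmarks.

```python
import torch

def per_class_split(labels: torch.Tensor, train=0.48, val=0.32, seed=0):
    """Return boolean train/val/test masks, splitting each class 48/32/20."""
    g = torch.Generator().manual_seed(seed)
    n = labels.numel()
    train_mask = torch.zeros(n, dtype=torch.bool)
    val_mask = torch.zeros(n, dtype=torch.bool)
    test_mask = torch.zeros(n, dtype=torch.bool)
    for c in labels.unique():
        idx = (labels == c).nonzero(as_tuple=True)[0]
        idx = idx[torch.randperm(idx.numel(), generator=g)]
        n_train = int(train * idx.numel())
        n_val = int(val * idx.numel())
        train_mask[idx[:n_train]] = True
        val_mask[idx[n_train:n_train + n_val]] = True
        test_mask[idx[n_train + n_val:]] = True
    return train_mask, val_mask, test_mask
```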
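The torch-householder dependency noted in the software row is used to parametrize orthogonal matrices as products of Householder reflections. As a library-free illustration of that construction (not the torch-householder API), a d x d orthogonal matrix can be built from d unconstrained vectors:

```python
import torch

def householder_orthogonal(V: torch.Tensor) -> torch.Tensor:
    """Build an orthogonal matrix as a product of Householder reflections.

    V: (k, d) tensor of unconstrained reflection vectors.
    Returns the (d, d) orthogonal matrix Q = H_1 H_2 ... H_k,
    where H_i = I - 2 v_i v_i^T / ||v_i||^2.
    """
    k, d = V.shape
    Q = torch.eye(d, dtype=V.dtype)
    for v in V:
        v = v / v.norm()
        Q = Q - 2.0 * torch.outer(Q @ v, v)   # Q <- Q (I - 2 v v^T)
    return Q

Q = householder_orthogonal(torch.randn(3, 3))
print(torch.allclose(Q @ Q.T, torch.eye(3), atol=1e-5))  # True up to float error
```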
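The training recipe in the last row (Adam, learning rate 0.01, weight decay 5e-4, 200 epochs, early stopping with patience 50 on validation accuracy) corresponds to a standard full-batch loop like the sketch below. The model, data object, and masks are placeholders, not the repository's training script.

```python
import copy
import torch
import torch.nn.functional as F

def train(model, data, epochs=200, patience=50, lr=0.01, weight_decay=5e-4):
    """Full-batch node classification with early stopping on validation accuracy."""
    opt = torch.optim.Adam(model.parameters(), lr=lr, weight_decay=weight_decay)
    best_val, best_state, bad_epochs = 0.0, None, 0
    for epoch in range(epochs):
        model.train()
        opt.zero_grad()
        out = model(data.x, data.edge_index)
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        opt.step()

        model.eval()
        with torch.no_grad():
            pred = model(data.x, data.edge_index).argmax(dim=-1)
            val_acc = (pred[data.val_mask] == data.y[data.val_mask]).float().mean().item()
        if val_acc > best_val:
            best_val, best_state, bad_epochs = val_acc, copy.deepcopy(model.state_dict()), 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:   # stop after 50 epochs without improvement
                break
    if best_state is not None:
        model.load_state_dict(best_state)
    return model
```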