Sheaf Hypergraph Networks

Authors: Iulia Duta, Giulia Cassarà, Fabrizio Silvestri, Pietro Liò

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Through extensive experimentation, we show that this generalization significantly improves performance, achieving top results on multiple benchmark datasets for hypergraph node classification.
Researcher Affiliation | Academia | Iulia Duta (University of Cambridge, id366@cam.ac.uk); Giulia Cassarà (University of Rome, La Sapienza, giulia.cassara@uniroma1.it); Fabrizio Silvestri (University of Rome, La Sapienza, fabrizio.silvestri@uniroma1.it); Pietro Liò (University of Cambridge, pl219@cam.ac.uk)
Pseudocode | No | The paper describes computational procedures but does not include formal pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide a link to, or an explicit statement about, the availability of source code for its own method. The only code link refers to a baseline method, HNHN (“Code available: https://github.com/twistedcubic/HNHN”), which is not the authors' own work.
Open Datasets | Yes | We evaluate our model on eight real-world datasets that vary in domain, scale, and heterophily level and are commonly used for benchmarking hypergraphs. These include Cora, Citeseer, Pubmed, Cora-CA, DBLP-CA [37], House [52], Senate and Congress [53].
Dataset Splits | Yes | To ensure a fair comparison with the baselines, we follow the same training procedures used in [50] by randomly splitting the data into 50% training samples, 25% validation samples and 25% test samples, and running each model 10 times with different random splits. (A sketch of this split protocol follows the table.)
Hardware Specification | Yes | The experiments are executed on a single NVIDIA Quadro RTX 8000 with 48GB of GPU memory.
Software Dependencies | No | The paper mentions PyTorch indirectly when discussing a fix to the HyperGCN code, but it does not provide version numbers for PyTorch or any other software dependency used in its experiments.
Experiment Setup | No | Details on all the model choices and hyper-parameters can be found in the Supplementary Material.
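
As an illustration of the split protocol quoted in the Dataset Splits row, the minimal sketch below draws ten random 50/25/25 train/validation/test partitions over node indices. It is not the authors' code; the node count and seed values are hypothetical placeholders.

```python
# Illustrative sketch of the 50%/25%/25% split protocol quoted above.
# Not the authors' code: num_nodes and the seeds are placeholder assumptions.
import numpy as np

def random_split(num_nodes: int, seed: int):
    """Return train/val/test node-index arrays in a 50/25/25 proportion."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = num_nodes // 2           # 50% training
    n_val = num_nodes // 4             # 25% validation
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]  # remaining ~25% test
    return train_idx, val_idx, test_idx

# Ten runs, each on a different random split, as in the evaluation protocol.
splits = [random_split(num_nodes=2708, seed=s) for s in range(10)]
```

Under this protocol, reported metrics would be aggregated over the ten splits.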