Equivariant Hypergraph Diffusion Neural Operators

Authors: Peihao Wang, Shenghao Yang, Yunyu Liu, Zhangyang Wang, Pan Li

ICLR 2023

Reproducibility assessment: each entry below lists the variable, the result, and the LLM response (supporting evidence quoted from the paper).
Research Type: Experimental. Evidence: "We evaluate ED-HNN for node classification on nine real-world hypergraph datasets. ED-HNN uniformly outperforms the best baselines over these nine datasets and achieves more than a 2% gain in prediction accuracy over four datasets therein. Our code is available at: https://github.com/Graph-COM/ED-HNN." and "We evaluate ED-HNN by performing node classification over 9 real-world datasets that cover both heterophilic and homophilic hypergraphs. ED-HNN uniformly outperforms all baseline methods across these datasets and achieves significant improvement (>2%) over 4 datasets therein. ED-HNN also shows strong robustness when going deep. We also carefully design synthetic experiments to verify the expressiveness of ED-HNN in approximating pre-defined equivariant diffusion operators."
Researcher Affiliation: Academia. Evidence: "1 University of Texas at Austin, 2 Georgia Tech, 3 University of Waterloo, 4 Purdue University. {peihaowang,atlaswang}@utexas.edu, shenghao.yang@uwaterloo.ca, liu3154@purdue.edu, panli@gatech.edu"
Pseudocode: Yes. Evidence: "Algorithm 1: ED-HNN. Algorithm 2: ED-HNNII."
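To make the algorithmic structure concrete, below is a minimal sketch of one ED-HNN-style layer: permutation-invariant node-to-hyperedge aggregation followed by hyperedge-to-node diffusion, each parameterized by a shared MLP. This is an illustration written against a hypothetical incidence-pair input format, not the authors' implementation (see the repository above for that); all module and variable names are ours.

```python
# Illustrative sketch of one ED-HNN-style layer, NOT the authors' code.
# The hypergraph is assumed to be given as incidence pairs (v, e), meaning
# node v belongs to hyperedge e.
import torch
import torch.nn as nn

class EDHNNLayerSketch(nn.Module):
    def __init__(self, dim):
        super().__init__()
        self.node_to_edge = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())  # per-node message
        self.edge_to_node = nn.Sequential(nn.Linear(dim, dim), nn.ReLU())  # per-edge message
        self.update = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())    # node update

    def forward(self, x, incidence, num_edges):
        # x: [num_nodes, dim] node features
        # incidence: [2, num_pairs]; row 0 = node ids, row 1 = hyperedge ids
        v, e = incidence
        # 1) node -> hyperedge: permutation-invariant sum of per-node messages
        edge_state = x.new_zeros(num_edges, x.size(1))
        edge_state.index_add_(0, e, self.node_to_edge(x)[v])
        # 2) hyperedge -> node: sum edge messages back to member nodes
        node_agg = torch.zeros_like(x)
        node_agg.index_add_(0, v, self.edge_to_node(edge_state)[e])
        # 3) combine the aggregated diffusion signal with the node's own features
        return self.update(torch.cat([x, node_agg], dim=-1))

# Usage example: 4 nodes, 2 hyperedges {0, 1, 2} and {2, 3}
x = torch.randn(4, 8)
incidence = torch.tensor([[0, 1, 2, 2, 3], [0, 0, 0, 1, 1]])
out = EDHNNLayerSketch(8)(x, incidence, num_edges=2)
```

Because both aggregation steps are sums over shared MLP outputs, the layer output is equivariant to permutations of nodes within a hyperedge, which is the structural property the paper's algorithms rely on.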
Open Source Code: Yes. Evidence: "Our code is available at: https://github.com/Graph-COM/ED-HNN."
Open Datasets: Yes. Evidence: "We evaluate ED-HNN on nine real-world benchmarking hypergraphs. We focus on the semi-supervised node classification task. The nine datasets include co-citation networks (Cora, Citeseer, Pubmed) and co-authorship networks (Cora-CA, DBLP-CA) (Yadati et al., 2019), Walmart (Amburg et al., 2020), House (Chodrow et al., 2021), and Congress and Senate (Fowler, 2006a;b)."
Dataset Splits: Yes. Evidence: "We randomly split the data into training/validation/test samples using a 50%/25%/25% splitting percentage, following Chien et al. (2022). We run each model ten times with different training/validation splits to obtain the standard deviation."
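The split protocol is simple enough to sketch directly; the exact seeding in the released code may differ, so treat the following as illustrative:

```python
# Sketch of the 50%/25%/25% random split protocol run ten times with
# different splits. Seeding is illustrative, not taken from the released code.
import torch

def random_split(num_nodes, seed):
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(0.50 * num_nodes)
    n_val = int(0.25 * num_nodes)
    train_idx = perm[:n_train]
    val_idx = perm[n_train:n_train + n_val]
    test_idx = perm[n_train + n_val:]  # remaining ~25%
    return train_idx, val_idx, test_idx

# Ten runs with different training/validation splits, as in the protocol above:
splits = [random_split(num_nodes=1000, seed=s) for s in range(10)]
```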
Hardware Specification: Yes. Evidence: "The training and testing times are measured on Walmart using the same server with one NVIDIA RTX A6000 GPU."
Software Dependencies: No. The paper mentions using GNN platforms (Fey & Lenssen, 2019; Wang et al., 2019) and the PyTorch Geometric library, but it does not provide specific version numbers for these or other software dependencies.
Experiment Setup: Yes. Evidence: "For ED-HNN, we adopt the Adam optimizer with a fixed learning rate of 0.001 and weight decay of 0.0, and train for 500 epochs on all datasets. We fix the input dropout rate to 0.2 and the dropout rate to 0.3. For internal MLPs, we add a LayerNorm to each layer, similar to Chien et al. (2022). Other parameters regarding model sizes are obtained by grid search and enumerated in Table 6. The search range for the number of layers is {1, 2, 4, 6, 8} and for the hidden dimension is {96, 128, 256, 512}."
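Putting the quoted hyperparameters together, a sketch of the optimization loop and grid search might look as follows; `model` and `data` are placeholders for the released implementation, and only the numeric settings are taken from the paper:

```python
# Sketch of the reported optimization setup (lr=0.001, weight decay=0.0,
# 500 epochs, input dropout 0.2, dropout 0.3) and the grid-search ranges.
# The model construction and data objects are placeholders.
import itertools
import torch
import torch.nn.functional as F

INPUT_DROPOUT, DROPOUT, EPOCHS = 0.2, 0.3, 500
LAYER_GRID = [1, 2, 4, 6, 8]
HIDDEN_GRID = [96, 128, 256, 512]

def train(model, data):
    optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0)
    model.train()
    for _ in range(EPOCHS):
        optimizer.zero_grad()
        out = model(data)  # full-graph forward pass
        loss = F.cross_entropy(out[data.train_mask], data.y[data.train_mask])
        loss.backward()
        optimizer.step()

# Grid search over the model-size hyperparameters listed above:
for num_layers, hidden_dim in itertools.product(LAYER_GRID, HIDDEN_GRID):
    pass  # instantiate an ED-HNN of this size, call train(), track validation accuracy
```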