Hypergraph Structure Learning for Hypergraph Neural Networks

Authors: Derun Cai, Moxian Song, Chenxi Sun, Baofeng Zhang, Shenda Hong, Hongyan Li

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments conducted on 7 datasets show that HSL outperforms the state-of-the-art baselines while adaptively sparsifying hypergraph structures.
Researcher Affiliation | Academia | (1) Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China; (2) School of Electronics Engineering and Computer Science, Peking University, Beijing, China; (3) National Institute of Health Data Science, Peking University, Beijing, China; (4) Institute of Medical Technology, Health Science Center of Peking University, Beijing, China.
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements or links to open-source code for the described methodology.
Open Datasets | Yes | To evaluate the effectiveness of HSL, experiments are conducted on 7 public datasets: Cora, Citeseer, Coauthor-Cora, Coauthor-DBLP [Yadati et al., 2019; Rossi and Ahmed, 2015; Sen et al., 2008], NTU2012 [Chen et al., 2003], and 20Newsgroups [Zhou et al., 2006]. The statistics of the datasets are shown in Table 2.
Dataset Splits | Yes | The datasets are randomly split into training, validation, and test sets in a 0.5:0.25:0.25 ratio. The experiments are repeated with 20 random data splits (see the split sketch after this table).
Hardware Specification | Yes | All programs are implemented using the PyTorch Geometric library (PyG) [Fey and Lenssen, 2019] with PyTorch 1.8 on an Nvidia RTX 3090 GPU.
Software Dependencies | Yes | All programs are implemented using the PyTorch Geometric library (PyG) [Fey and Lenssen, 2019] with PyTorch 1.8 on an Nvidia RTX 3090 GPU.
Experiment Setup | Yes | The number of HGNN layers is set to 1. The Adam optimizer is used with a learning rate tuned over {0.001, 0.0001} on the validation set. The hidden dimension of the HGNNs on different datasets is tuned over {64, 128, 256, 512}. The hyperparameter p_add for enhancing the hypergraph structure is tuned over {0, 0.01, 0.02, 0.05} (see the configuration sketch after this table).
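
Below is a minimal sketch of the split protocol quoted above (a 0.5:0.25:0.25 random split repeated over 20 seeds). The use of NumPy, the function name, and the node count are illustrative assumptions, not the authors' code.

```python
import numpy as np

def random_split(num_nodes, seed, ratios=(0.5, 0.25, 0.25)):
    """Shuffle node indices and split them 0.5:0.25:0.25 into train/val/test."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(num_nodes)
    n_train = int(ratios[0] * num_nodes)
    n_val = int(ratios[1] * num_nodes)
    return perm[:n_train], perm[n_train:n_train + n_val], perm[n_train + n_val:]

# The experiments are repeated with 20 random data splits; one seed per split.
splits = [random_split(num_nodes=2708, seed=s) for s in range(20)]  # 2708 is a placeholder node count
```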
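
The reported hyperparameter search can likewise be sketched as a simple grid over the values given in the last row. Only the grids, the single HGNN layer, and the Adam optimizer come from the text; build_hsl_model and train_and_evaluate are hypothetical placeholders for the model constructor and training loop.

```python
import itertools
import torch

# Grids taken from the reported setup.
learning_rates = [0.001, 0.0001]
hidden_dims = [64, 128, 256, 512]
p_add_values = [0, 0.01, 0.02, 0.05]  # structure-enhancement probability p_add
num_layers = 1                        # number of HGNN layers

best_acc, best_config = -1.0, None
for lr, hidden, p_add in itertools.product(learning_rates, hidden_dims, p_add_values):
    model = build_hsl_model(hidden_dim=hidden, num_layers=num_layers, p_add=p_add)  # hypothetical constructor
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)                         # Adam, as reported
    val_acc = train_and_evaluate(model, optimizer)                                  # hypothetical training routine
    if val_acc > best_acc:
        best_acc, best_config = val_acc, {"lr": lr, "hidden": hidden, "p_add": p_add}
```

The best configuration is selected on the validation set, matching the statement that the learning rate is tuned there.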