Hypergraph Neural Networks
Authors: Yifan Feng, Haoxuan You, Zizhao Zhang, Rongrong Ji, Yue Gao
AAAI 2019, pp. 3558-3565
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have conducted experiments on citation network classification and visual object recognition tasks and compared HGNN with graph convolutional networks and other traditional methods. Experimental results demonstrate that the proposed HGNN method outperforms recent state-of-the-art methods. |
| Researcher Affiliation | Academia | Yifan Feng,1 Haoxuan You,3 Zizhao Zhang,3 Rongrong Ji,1,2 Yue Gao3 1Fujian Key Laboratory of Sensing and Computing for Smart City, Department of Cognitive Science, School of Information Science and Engineering, Xiamen University, 361005, China 2Peng Cheng Laboratory, China 3BNRist, KLISS, School of Software, Tsinghua University, 100084, China. {evanfeng97, haoxuanyou}@gmail.com, rrji@xmu.edu.cn, {zz-z14,gaoyue}@tsinghua.edu.cn |
| Pseudocode | No | The paper describes the proposed method using mathematical formulations and figures but does not provide structured pseudocode or algorithm blocks (see the layer sketch below the table). |
| Open Source Code | No | The paper does not contain any explicit statements about making the source code available or provide a link to a code repository. |
| Open Datasets | Yes | Here, two widely used citation network datasets, i.e., Cora and Pubmed (Sen et al. 2008) are employed. [...] Two public benchmarks are employed here, including the Princeton ModelNet40 dataset (Wu et al. 2015) and the National Taiwan University (NTU) 3D model dataset (Chen et al. 2003) |
| Dataset Splits | Yes | Table 1 of the paper summarizes the citation datasets: training nodes 140 (Cora) / 60 (Pubmed), validation nodes 500 / 500, testing nodes 1,000 / 1,000. [...] the same training/testing split is applied as introduced in (Wu et al. 2015), where 9,843 objects are used for training and 2,468 objects are used for testing. [...] In the NTU dataset, 80% of the data are used for training and the other 20% for testing (see the split sketch below the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments. |
| Software Dependencies | No | The paper mentions specific optimizers and activation functions but does not list any software libraries or dependencies with specific version numbers (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | Yes | The feature dimension of the hidden layer is set as 16 and the dropout (Srivastava et al. 2014) is employed to avoid overfitting with drop rate p = 0.5. We choose the ReLU as the nonlinear activation function. During the training process, we use Adam optimizer (Kingma and Ba 2014) to minimize our cross-entropy loss function with a learning rate of 0.001 (see the configuration sketch below the table). |
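
Although the paper provides no pseudocode (see the Pseudocode row above), its hyperedge convolution is stated analytically as $X^{(l+1)} = \sigma\left(D_v^{-1/2} H W D_e^{-1} H^\top D_v^{-1/2} X^{(l)} \Theta^{(l)}\right)$, with $H$ the $|V| \times |E|$ incidence matrix, $W$ the diagonal hyperedge-weight matrix, and $D_v$, $D_e$ the vertex and hyperedge degree matrices. The following is a minimal sketch of that update, assuming a dense PyTorch implementation (the paper names no framework, per the Software Dependencies row); the helper names `hypergraph_adjacency` and `HGNNConv` are ours, not the authors'.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def hypergraph_adjacency(H: torch.Tensor, w: torch.Tensor) -> torch.Tensor:
    """Build G = Dv^{-1/2} H W De^{-1} H^T Dv^{-1/2} from a dense
    incidence matrix H of shape (|V|, |E|) and hyperedge weights w of
    shape (|E|,)."""
    d_v = (H * w).sum(dim=1)                      # vertex degrees d(v) = sum_e w(e) h(v, e)
    d_e = H.sum(dim=0)                            # hyperedge degrees delta(e) = sum_v h(v, e)
    dv_inv_sqrt = d_v.clamp(min=1e-12).pow(-0.5)  # Dv^{-1/2} kept as a vector
    de_inv = d_e.clamp(min=1e-12).reciprocal()    # De^{-1} kept as a vector
    H_norm = dv_inv_sqrt.unsqueeze(1) * H         # Dv^{-1/2} H via row scaling
    return (H_norm * (w * de_inv)) @ H_norm.t()   # column scaling by W De^{-1}, then H^T Dv^{-1/2}


class HGNNConv(nn.Module):
    """One hyperedge convolution layer: X' = G X Theta.
    The nonlinearity is applied by the caller."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.theta = nn.Linear(in_dim, out_dim, bias=False)

    def forward(self, X: torch.Tensor, G: torch.Tensor) -> torch.Tensor:
        return G @ self.theta(X)
```

Because $G$ depends only on the hypergraph structure, it can be precomputed once and reused in every layer, mirroring the GCN practice the paper builds on; for large vertex sets a sparse incidence matrix would be the natural substitution.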
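For the NTU benchmark the paper reports only the 80%/20% ratio, not whether the split is random or fixed, so any reproduction has to pick a policy. A seeded shuffle such as the following is one plausible choice; the seed and the use of NumPy are assumptions.

```python
import numpy as np

n_samples = 2012                      # number of NTU 3D shapes reported in the paper
rng = np.random.default_rng(seed=0)   # fixed seed is an assumption; the paper states none
idx = rng.permutation(n_samples)
n_train = int(0.8 * n_samples)        # 80% training, remaining 20% testing
train_idx, test_idx = idx[:n_train], idx[n_train:]
```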
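The reported hyperparameters translate directly into a two-layer model and training loop. Below is a minimal sketch reusing `HGNNConv` from the layer sketch above and assuming the node-classification setting; `X`, `G`, `y`, `train_mask`, and the epoch count are placeholders the paper does not specify.

```python
class HGNN(nn.Module):
    """Two-layer HGNN with the reported settings: 16-dim hidden layer,
    ReLU activation, dropout with p = 0.5."""

    def __init__(self, in_dim: int, n_classes: int, hidden: int = 16, p: float = 0.5):
        super().__init__()
        self.conv1 = HGNNConv(in_dim, hidden)
        self.conv2 = HGNNConv(hidden, n_classes)
        self.p = p

    def forward(self, X: torch.Tensor, G: torch.Tensor) -> torch.Tensor:
        X = F.relu(self.conv1(X, G))
        X = F.dropout(X, p=self.p, training=self.training)
        return self.conv2(X, G)       # logits, paired with cross-entropy below


model = HGNN(in_dim=X.shape[1], n_classes=int(y.max()) + 1)
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)  # Adam, lr = 0.001 per the paper

model.train()
for epoch in range(200):              # epoch count is an assumption; the paper omits it
    optimizer.zero_grad()
    loss = F.cross_entropy(model(X, G)[train_mask], y[train_mask])
    loss.backward()
    optimizer.step()
```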