Totally Dynamic Hypergraph Neural Networks
Authors: Peng Zhou, Zongqian Wu, Xiangxiang Zeng, Guoqiu Wen, Junbo Ma, Xiaofeng Zhu
IJCAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on real datasets demonstrate the effectiveness of the proposed method, compared to SOTA methods. |
| Researcher Affiliation | Academia | (1) Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin 541004, China; (2) University of Electronic Science and Technology of China; (3) Hunan University |
| Pseudocode | No | The paper describes its method using equations and text but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | The source code can be accessed via https://github.com/HHW-zhou/TDHNN. |
| Open Datasets | Yes | Following the work of the first hypergraph convolutional neural network [Feng et al., 2019], we use two visual object classification datasets (i.e., Princeton ModelNet40 [Wu et al., 2015] and the National Taiwan University 3D model dataset (NTU for short)). We also use the same method as [Jiang et al., 2019] to randomly sample different proportions of the data on Cora [Veličković et al., 2017] as the training set. |
| Dataset Splits | Yes | We adopted the same split standard for ModelNet40 and NTU, i.e., 80% as the training set and 20% as the testing set. Since the standard split of a dataset uses fixed training samples, a method may be affected by the fixed data distribution. For better comparison, we use the same method as [Jiang et al., 2019] to randomly sample different proportions of the data on Cora [Veličković et al., 2017] as the training set. Specifically, in addition to the standard split, we respectively select 2%, 5.2%, 10%, 20%, 30%, and 44% of the data for training (see the sampling sketch after the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper mentions software tools like 'PyTorch Geometric' and 'DHG (Deep Hypergraph)' for implementing comparison methods, but it does not specify their version numbers or other software dependencies with versions. |
| Experiment Setup | Yes | We uniformly set the feature dimension of the hyperedge, d_e, to 128 and the initial sampling number of hyperedges, m, to 100. The number of nodes used to update the hyperedge features and the number of hyperedges each node belongs to are both set to 10. For the hypergraph saturation score, we set the lower limit β to 0.9 and the upper limit γ to 0.95. We used dropout [Srivastava et al., 2014] to prevent overfitting and set the drop rate to 0.2. The optimizer we use is Adam [Kingma and Ba, 2014], and the learning rate is 0.001 (see the configuration sketch after the table). |
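
The Cora sampling protocol quoted in the Dataset Splits row can be illustrated with a short sketch. This is a hedged reconstruction, not code from the authors' repository: the helper `random_train_mask` and its seed handling are assumptions, while the training proportions and Cora's node count (2,708) come from the paper and the standard benchmark.

```python
# Sketch of randomly sampling a fixed proportion of nodes as the training
# set, following the protocol attributed to [Jiang et al., 2019]. The
# function name and seeding scheme are illustrative assumptions.
import torch

def random_train_mask(num_nodes: int, train_ratio: float, seed: int = 0) -> torch.Tensor:
    """Return a boolean mask selecting `train_ratio` of nodes for training."""
    gen = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=gen)  # random node order
    num_train = int(train_ratio * num_nodes)
    mask = torch.zeros(num_nodes, dtype=torch.bool)
    mask[perm[:num_train]] = True
    return mask

# The proportions reported for Cora (2,708 nodes in the standard benchmark).
for ratio in [0.02, 0.052, 0.10, 0.20, 0.30, 0.44]:
    mask = random_train_mask(2708, ratio)
    print(f"{ratio:.1%}: {int(mask.sum())} training nodes")
```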
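The hyperparameters quoted in the Experiment Setup row can likewise be collected into a PyTorch-style configuration. This is a minimal sketch under stated assumptions: the key names in `config` and the placeholder model are illustrative and do not reflect the actual implementation at https://github.com/HHW-zhou/TDHNN; only the numeric values come from the paper.

```python
# Hedged sketch of the reported training settings. A placeholder linear
# layer stands in for the TDHNN model so the dropout and Adam settings
# can be shown wired up; it is NOT the authors' architecture.
import torch

config = {
    "hyperedge_dim": 128,        # feature dimension d_e of each hyperedge
    "init_num_hyperedges": 100,  # initial sampling number m of hyperedges
    "nodes_per_hyperedge": 10,   # nodes used to update hyperedge features
    "hyperedges_per_node": 10,   # hyperedges each node belongs to
    "saturation_lower": 0.9,     # lower limit (beta) of the saturation score
    "saturation_upper": 0.95,    # upper limit (gamma) of the saturation score
    "dropout": 0.2,              # drop rate [Srivastava et al., 2014]
    "lr": 1e-3,                  # Adam learning rate [Kingma and Ba, 2014]
}

model = torch.nn.Sequential(     # placeholder standing in for TDHNN
    torch.nn.Linear(config["hyperedge_dim"], config["hyperedge_dim"]),
    torch.nn.Dropout(p=config["dropout"]),
)
optimizer = torch.optim.Adam(model.parameters(), lr=config["lr"])
```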