Hypergraph Dynamic System
Authors: Jielong Yan, Yifan Feng, Shihui Ying, Yue Gao
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on 9 datasets demonstrate that HDSode beats all compared methods. HDSode achieves stable performance with increased layers and addresses the poor controllability of HGNNs. We also provide the feature visualization of the evolutionary process to demonstrate the controllability and stabilization of HDSode. |
| Researcher Affiliation | Academia | Jielong Yan¹, Yifan Feng¹, Shihui Ying² & Yue Gao¹. ¹School of Software, BNRist, THUIBCS, BLBCI, Tsinghua University; ²Department of Mathematics, School of Science, Shanghai University |
| Pseudocode | Yes | Appendix A, "The Algorithm of HDSode", gives Algorithm 1: the algorithm of the HDSode framework. (A hedged propagation sketch appears below the table.) |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | we employ 9 publicly accessible hypergraph benchmark datasets from existing research on hypergraph neural networks, including Cora-CA and DBLP-CA from Yadati et al. (2019), News20 from Asuncion & Newman (2007), IMDB4k-CA and IMDB4k-CD from Fu et al. (2020), DBLP4k-CC and DBLP4k-CP from Sun et al. (2011), Cooking from Gao et al. (2022), and NTU from Chen et al. (2003). |
| Dataset Splits | Yes | In both settings, we fix the total number of known label vertices in the training set and the validation set, which contains a total of 1,500 vertices including 10 vertices per class for training. Vertices not in the training set and validation set are for test. The training, validation, and test data for each experiment are divided five times at random, and the average performance and standard deviation of each method are reported for fair comparisons. (A hedged split-generation sketch appears below the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions using the “Adam optimizer” but does not specify any software versions for programming languages, libraries, or frameworks used in the implementation. |
| Experiment Setup | Yes | For graph neural networks and hypergraph neural networks, the number of network layers is set to 2 to prevent over-smoothing, and the dimension of hidden layers is 64. ... In HDSode, the control term time interval is set to 20, the termination time T is set to 40, and the search range for the hyperparameters α_v, α_e in the ODE is {0.05, 0.1, ..., 0.95}. ... All models are trained for 200 epochs using the Adam optimizer with learning rate in {10^-2, 10^-3}, weight decay in {5×10^-4, 1×10^-4, 5×10^-5}, and dropout in {0.05, 0.1, ..., 0.95}. The loss function is cross-entropy. We randomly partition the dataset five times, select hyperparameters based on the average validation accuracy, and report the average test accuracy and standard deviation under those hyperparameters. Table S4 provides the hyperparameter details of HDSode in the transductive setting. (A hedged training-configuration sketch appears below the table.) |
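
To make the Pseudocode and Experiment Setup rows concrete, here is a minimal sketch of what an ODE-style vertex-edge co-evolution could look like, assuming a teleport-style mixing governed by α_v and α_e and a learned control term applied every 20 propagation steps up to T = 40. The update order, the degree normalization, and the `hdsode_propagate`/`control` names are illustrative assumptions, not the paper's exact update; the authoritative definition is Algorithm 1 in the paper's Appendix A.

```python
# Hypothetical sketch (NOT the paper's exact update): a teleport-style
# vertex-edge co-evolution on an incidence matrix H, with a control step
# applied every `control_interval` propagation steps up to time T.
import numpy as np

def hdsode_propagate(Xv, Xe, H, alpha_v=0.1, alpha_e=0.1,
                     T=40, control_interval=20, control=None):
    """Xv: (n_vertices, d) vertex features, Xe: (n_edges, d) hyperedge features,
    H: (n_vertices, n_edges) binary incidence matrix."""
    Dv = H.sum(axis=1, keepdims=True).clip(min=1)    # vertex degrees
    De = H.sum(axis=0, keepdims=True).clip(min=1).T  # hyperedge degrees
    for t in range(T):
        # vertex -> hyperedge diffusion, mixed with the previous edge state
        Xe = (1 - alpha_e) * Xe + alpha_e * (H.T @ Xv) / De
        # hyperedge -> vertex diffusion, mixed with the previous vertex state
        Xv = (1 - alpha_v) * Xv + alpha_v * (H @ Xe) / Dv
        # learned control term (e.g. an MLP) applied every `control_interval` steps
        if control is not None and (t + 1) % control_interval == 0:
            Xv, Xe = control(Xv, Xe)
    return Xv, Xe

# usage on a toy incidence matrix with 3 vertices and 2 hyperedges
H = np.array([[1, 0], [1, 1], [0, 1]], dtype=float)
Xv, Xe = hdsode_propagate(np.random.randn(3, 8), np.zeros((2, 8)), H)
```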
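
The Dataset Splits row describes a fixed labelled pool of 1,500 vertices, 10 training vertices per class, and five random partitions. A minimal sketch of such a sampler is shown below; the `make_split` name and the exact sampling order are assumptions, since the paper does not publish its split code.

```python
import numpy as np

def make_split(labels, n_known=1500, n_train_per_class=10, seed=0):
    """Random transductive split: `n_known` labelled vertices are drawn,
    10 per class go to training, the rest of the pool to validation,
    and all remaining vertices to test. Returns index arrays."""
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    order = rng.permutation(len(labels))
    known = order[:n_known]
    train = np.concatenate([
        known[labels[known] == c][:n_train_per_class]
        for c in np.unique(labels)
    ])
    val = np.setdiff1d(known, train)
    test = np.setdiff1d(np.arange(len(labels)), known)
    return train, val, test

# five random partitions of a toy label vector, as in the reported protocol
toy_labels = np.random.randint(0, 7, size=3000)
splits = [make_split(toy_labels, seed=s) for s in range(5)]
```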
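
The Experiment Setup row lists the search ranges but not the search procedure. The sketch below assumes an exhaustive grid over the reported ranges with model selection on average validation accuracy across the five splits; `train_and_evaluate` is a hypothetical stand-in for training HDSode for 200 epochs with Adam and cross-entropy loss on one split.

```python
# Hypothetical grid search over the reported hyperparameter ranges.
import itertools
import numpy as np

GRID = {
    "lr": [1e-2, 1e-3],
    "weight_decay": [5e-4, 1e-4, 5e-5],
    "dropout": np.arange(0.05, 1.0, 0.05).round(2).tolist(),
    "alpha_v": np.arange(0.05, 1.0, 0.05).round(2).tolist(),
    "alpha_e": np.arange(0.05, 1.0, 0.05).round(2).tolist(),
}

def train_and_evaluate(config, split):
    """Placeholder: train for 200 epochs with Adam + cross-entropy and
    return (val_accuracy, test_accuracy) for the given split."""
    rng = np.random.default_rng(hash(tuple(sorted(config.items()))) % 2**32)
    return rng.uniform(0.5, 0.9), rng.uniform(0.5, 0.9)  # dummy numbers

best = None
for values in itertools.product(*GRID.values()):
    config = dict(zip(GRID.keys(), values))
    results = [train_and_evaluate(config, split=s) for s in range(5)]
    val_acc = np.mean([v for v, _ in results])
    test_acc = np.mean([t for _, t in results])
    test_std = np.std([t for _, t in results])
    if best is None or val_acc > best[0]:
        best = (val_acc, test_acc, test_std, config)
print(f"best config: {best[3]}, test acc {best[1]:.3f} ± {best[2]:.3f}")
```

Note that the full Cartesian product over these ranges is large, so in practice the search may well have been restricted; the paper's Table S4 records only the selected values per dataset.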