A Generalized Neural Diffusion Framework on Graphs
Authors: Yibo Li, Xiao Wang, Hongrui Liu, Chuan Shi
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experimental results well demonstrate the effectiveness of HiD-Net over state-of-the-art graph diffusion networks. Datasets. For comprehensive comparison, we use seven real-world datasets to evaluate the performance of node classification. |
| Researcher Affiliation | Collaboration | 1Beijing University of Posts and Telecommunications 2Beihang University 3Ant Group |
| Pseudocode | No | The paper provides mathematical equations and descriptions of the model but does not include structured pseudocode or an algorithm block. |
| Open Source Code | No | For other baseline models: GCN, GAT, APPNP, GRAND, GRAND++, DGC, and ADC, we follow the parameters suggested by (Kipf and Welling 2016a; Veličković et al. 2017; Klicpera, Bojchevski, and Günnemann 2018; Chamberlain et al. 2021; Thorpe et al. 2022; Wang et al. 2021; Zhao et al. 2021) on Cora, Citeseer, and Pubmed, and carefully fine-tune them to get optimal performance on Chameleon, Squirrel, and Actor. They are implemented based on their open repositories, where the code can be found in Appendix. (This refers to the baselines being drawn from open repositories, not the authors' HiD-Net code.) The paper does not state that the code for HiD-Net is open-source. |
| Open Datasets | Yes | Datasets. For comprehensive comparison, we use seven real-world datasets to evaluate the performance of node classification. They are three citation graphs, i.e., Cora, Citeseer, Pubmed (Kipf and Welling 2016a), two Wikipedia networks, i.e., Chameleon and Squirrel (Pei et al. 2020), one Actor co-occurrence network, Actor (Pei et al. 2020), and one Open Graph Benchmark (OGB) graph, ogbn-arxiv (Hu et al. 2020). |
| Dataset Splits | No | The paper mentions running experiments multiple times and performing a hyperparameter search (which implies a validation set), but it does not explicitly provide the training/validation/test split percentages or sample counts in the main text. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware specifications (e.g., GPU or CPU models) used for conducting the experiments. |
| Software Dependencies | No | The paper does not specify the software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or other libraries) used for the experiments. |
| Experiment Setup | Yes | We perform a hyperparameter search for HiD-Net on all datasets and the details of hyperparameter can be seen in Appendix. For other baseline models: GCN, GAT, APPNP, GRAND, GRAND++, DGC, and ADC, we follow the parameters suggested by [citations]... and carefully fine-tune them to get optimal performance... |