Dynamic Reactive Spiking Graph Neural Network
Authors: Han Zhao, Xu Yang, Cheng Deng, Junchi Yan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on various domain-related datasets can demonstrate the effectiveness of our model. Our code is available at https://github.com/hzhao98/DRSGNN. |
| Researcher Affiliation | Academia | Han Zhao¹, Xu Yang¹, Cheng Deng¹, Junchi Yan²; ¹School of Electronic Engineering, Xidian University, China; ²Department of Computer Science and Engineering & MoE Key Lab of AI, Shanghai Jiao Tong University, China |
| Pseudocode | Yes | Algorithm 1: The training procedure of our proposed model |
| Open Source Code | Yes | Our code is available at https://github.com/hzhao98/DRSGNN. |
| Open Datasets | Yes | We use twelve graph datasets: Cora (McCallum et al. 2000), Citeseer (Sen et al. 2008), Pubmed (Namata et al. 2012), ogbn-arxiv (Hu et al. 2020), Amazon Photos, Amazon Computers, ACM (Fan et al. 2020), DBLP, Co-author CS (Shchur et al. 2018), Co-author Physics (Shchur et al. 2018), flickr (Huang, Li, and Hu 2017), and blogcatalog (Huang, Li, and Hu 2017). |
| Dataset Splits | Yes | In the experiments, the splits for three datasets (Cora, Citeseer, and Pubmed) follow (Kipf and Welling 2016), while the remaining datasets use the default splits provided with them (a loading sketch follows the table). |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | While the paper mentions data splitting rules and evaluation metrics, it does not provide specific experimental setup details such as concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or detailed training configurations in the main text. |
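For the Dataset Splits row above, the following is a minimal sketch (not from the paper) of loading the standard "public" Planetoid splits of (Kipf and Welling 2016) for Cora, Citeseer, and Pubmed. It assumes the PyTorch Geometric library, which the paper does not name as a dependency.

```python
# Minimal sketch: load the public Planetoid splits (Kipf and Welling 2016).
# Assumption: PyTorch Geometric is installed; the paper does not specify its stack.
from torch_geometric.datasets import Planetoid

for name in ["Cora", "CiteSeer", "PubMed"]:
    dataset = Planetoid(root="data", name=name, split="public")
    data = dataset[0]
    print(
        name,
        int(data.train_mask.sum()),  # 140 / 120 / 60 labeled training nodes
        int(data.val_mask.sum()),    # 500 validation nodes
        int(data.test_mask.sum()),   # 1000 test nodes
    )
```

The `split="public"` option reproduces the fixed label/validation/test masks from (Kipf and Welling 2016); splits for the other nine datasets would come from their respective default packaging.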