Exploiting Spiking Dynamics with Spatial-temporal Feature Normalization in Graph Learning
Authors: Mingkun Xu, Yujie Wu, Lei Deng, Faqiang Liu, Guoqi Li, Jing Pei
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | The paper is experimental: 'We instantiate our methods into two spiking graph models, including graph convolution SNNs and graph attention SNNs, and validate their performance on three node-classification benchmarks, including Cora, Citeseer, and Pubmed.' It also states, 'We instantiate the proposed framework into several models (GC-SNN and GA-SNN) and demonstrate their performance by multiple experiments on the semisupervised node classification tasks,' and the Experiments section opens, 'We firstly investigate the capability of our models over semi-supervised node classification on three citation datasets to examine their performance.' Table 1 reports the performance comparison on benchmark datasets. (An illustrative spiking graph-convolution sketch follows the table.) |
| Researcher Affiliation | Academia | Center for Brain-Inspired Computing Research (CBICR), Beijing Innovation Center for Future Chip, Department of Precision Instrument, Tsinghua University {xmk18,wu-yj16,lfq18}@mails.tsinghua.edu.cn, {leideng,liguoqi,peij}@mail.tsinghua.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide concrete access to source code for the methodology described. |
| Open Datasets | Yes | The paper states, 'We use three standard citation network benchmark datasets Cora, Pubmed, and Citeseer, where nodes represent paper documents and edges are (undirected) citation links,' and 'We summarize the dataset statistics in Supplementary Materials.' Table 1 compares performance on these benchmarks, citing [Perozzi et al., 2014; Getoor, 2005; Yang et al., 2016; Defferrard et al., 2016]. (A dataset-loading sketch follows the table.) |
| Dataset Splits | No | The paper states, 'We provide the experimental configuration details in Supplementary Materials,' but does not provide specific dataset split information (exact percentages, sample counts, or detailed splitting methodology) within the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers. |
| Experiment Setup | No | The paper states, 'We provide the experimental configuration details in Supplementary Materials,' indicating they are not in the main text. It notes only that it keeps 'the same settings for GNN models (GCN, GAT) and our SNN models (GC-SNN, GA-SNN), and also keep the settings of each dataset same,' without specific hyperparameter values or training configurations. |
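
For readers who want a concrete picture of what a GC-SNN-style layer involves, the following is a minimal sketch of a spiking graph convolution: a GCN-style aggregation driving leaky integrate-and-fire (LIF) neurons. It is not the authors' implementation (the paper releases no code); the layer structure, decay, and threshold values are illustrative assumptions, and the paper's spatial-temporal feature normalization is omitted.

```python
import torch
import torch.nn as nn

class LIFGraphConv(nn.Module):
    """Sketch of a spiking graph convolution layer: a GCN-style aggregation
    (adj_norm @ X @ W) injected as input current into leaky integrate-and-fire
    neurons. Decay and threshold are illustrative defaults, not paper values."""
    def __init__(self, in_dim, out_dim, decay=0.2, v_th=0.5):
        super().__init__()
        self.lin = nn.Linear(in_dim, out_dim, bias=False)
        self.decay = decay
        self.v_th = v_th

    def forward(self, x, adj_norm, mem):
        # x: node features or spikes [N, in_dim]; adj_norm: normalized adjacency [N, N]
        h = self.lin(x)
        current = torch.sparse.mm(adj_norm, h) if adj_norm.is_sparse else adj_norm @ h
        mem = self.decay * mem + current          # leaky integration of membrane potential
        spike = (mem >= self.v_th).float()        # emit a spike where the threshold is crossed
        mem = mem * (1.0 - spike)                 # hard reset the membrane after spiking
        return spike, mem

# Toy usage: run the layer over T simulation time steps on a 4-node graph.
N, F_in, F_out, T = 4, 8, 16, 5
adj_norm = torch.eye(N)                           # stand-in for a normalized adjacency matrix
x = torch.rand(N, F_in)
layer = LIFGraphConv(F_in, F_out)
mem = torch.zeros(N, F_out)
for _ in range(T):
    spike, mem = layer(x, adj_norm, mem)
```

In practice, training such a layer requires a surrogate gradient for the non-differentiable spike function; that detail is omitted here.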
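
Since the main text does not document the dataset splits, a reproduction would typically fall back on the standard public splits of these benchmarks. The sketch below loads Cora, Citeseer, and Pubmed with torch_geometric's Planetoid loader; the use of this library and of the public Yang et al. (2016) split is an assumption, not something the paper specifies.

```python
# Assumption: torch_geometric's Planetoid loader with the standard "public"
# (Yang et al., 2016) split; the paper does not name its data tooling or splits.
from torch_geometric.datasets import Planetoid

for name in ["Cora", "CiteSeer", "PubMed"]:
    dataset = Planetoid(root=f"data/{name}", name=name, split="public")
    data = dataset[0]
    print(f"{name}: {data.num_nodes} nodes, {data.num_edges} edges, "
          f"{int(data.train_mask.sum())}/{int(data.val_mask.sum())}/{int(data.test_mask.sum())} "
          f"train/val/test labels")
```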