Spiking Graph Convolutional Networks

Authors: Zulun Zhu, Jiaying Peng, Jintang Li, Liang Chen, Qi Yu, Siqiang Luo

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our experimental results show that the proposed method could gain competitive performance against state-of-the-art approaches. Furthermore, we show that Spiking GCN on a neuromorphic chip can bring a clear advantage of energy efficiency into graph data analysis, which demonstrates its great potential to construct environment-friendly machine learning models." "To evaluate the effectiveness of the proposed Spiking GCN, we conduct extensive experiments that focus on four major objectives: (i) semi-supervised node classification on citation graphs, (ii) performance evaluation under limited training data in active learning, (iii) energy efficiency evaluation on neuromorphic chips, and (iv) extensions to other application domains."
Researcher Affiliation | Academia | Zulun Zhu (1,2), Jiaying Peng (1), Jintang Li (1), Liang Chen (1), Qi Yu (2), and Siqiang Luo (3). (1) Sun Yat-Sen University; (2) Rochester Institute of Technology; (3) Nanyang Technological University. Emails: zulun.zhu@gmail.com, {pengjy36,lijt55}@mail2.sysu.edu.cn, chenliang6@mail.sysu.edu.cn, qi.yu@rit.edu, siqiang.luo@ntu.edu.sg
Pseudocode | Yes | "The whole process is detailed by Algorithm 1 in Appendix B."
Open Source Code | Yes | "The code and Appendix are available on Github": https://github.com/ZulunZhu/SpikingGCN.git
Open Datasets | Yes | "For node classification, we test our model on four commonly used citation network datasets: Cora, Citeseer, ACM, and Pubmed [Wang et al., 2019]."
Dataset Splits | Yes | "For a fair comparison, we partition the data in two different ways. The first (Split I) is the same as [Yang et al., 2016], which is adopted by many existing baselines in the literature: 20 instances from each class are sampled as the training set, and 500 and 1000 instances are sampled as the validation and test sets, respectively. For the second split (Split II), the ratio of training to testing is 8:2, and 20% of the training samples are further used for validation."
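The two split schemes described above can be sketched in a few lines. This is a hypothetical reconstruction, not the paper's released code: the function names, the RNG seed, and the assumption that node labels arrive as an integer array are all illustrative.

```python
import numpy as np

def split_i(labels, num_classes, per_class=20, n_val=500, n_test=1000, seed=0):
    """Split I (Yang et al., 2016): sample 20 training nodes per class,
    then draw 500 validation and 1000 test nodes from the remainder."""
    rng = np.random.default_rng(seed)
    train = []
    for c in range(num_classes):
        idx = np.flatnonzero(labels == c)          # all nodes of class c
        train.extend(rng.choice(idx, size=per_class, replace=False))
    rest = rng.permutation(np.setdiff1d(np.arange(len(labels)), train))
    return np.array(train), rest[:n_val], rest[n_val:n_val + n_test]

def split_ii(labels, seed=0):
    """Split II: 80/20 train/test split, with 20% of the training set
    held out for validation."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(labels))
    n_train = int(0.8 * len(labels))
    train, test = idx[:n_train], idx[n_train:]
    n_val = int(0.2 * len(train))
    return train[n_val:], train[:n_val], test
```

Both functions return disjoint (train, validation, test) index arrays; on a 2000-node graph with four balanced classes, Split I yields 80/500/1000 nodes and Split II yields 1280/320/400.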
Hardware Specification | Yes | GCN is run on a TITAN RTX GPU (24 GB), and Spiking GCN on the ROLLS neuromorphic chip [Indiveri et al., 2015].
Software Dependencies | No | The paper does not provide specific version numbers for the software libraries or frameworks used in the experiments.
Experiment Setup | No | The main text does not state concrete hyperparameter values (e.g., learning rate, batch size, number of epochs) or specific optimizer settings for the experimental setup.