Spiking Graph Neural Network on Riemannian Manifolds

Authors: Li Sun, Zhenhao Huang, Qiqi Wan, Hao Peng, Philip S. Yu

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments on common graphs show the proposed MSG achieves superior performance to previous spiking GNNs and energy efficiency to conventional GNNs."
Researcher Affiliation | Academia | Li Sun, North China Electric Power University, Beijing 102206, China (ccesunli@ncepu.edu.cn); Zhenhao Huang, North China Electric Power University, Beijing 102206, China (huangzhenhao@ncepu.edu.cn); Qiqi Wan, North China Electric Power University, Beijing 102206, China (wanqiqi@ncepu.edu.cn); Hao Peng, Beihang University, Beijing 100191, China (penghao@buaa.edu.cn); Philip S. Yu, University of Illinois at Chicago, IL, USA (psyu@uic.edu)
Pseudocode | Yes | "Algorithm 1: Training MSG by the proposed Differentiation via Manifold"
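Algorithm 1 trains MSG by differentiating through the manifold rather than through a heuristic surrogate gradient. For contrast, here is a minimal sketch of the conventional approach it departs from: a surrogate-gradient IF layer built with SpikingJelly. The layer width, batch size, and ATan surrogate are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn
from spikingjelly.activation_based import neuron, surrogate, functional

# Illustrative only: a plain surrogate-gradient IF layer, NOT the paper's
# Differentiation via Manifold. T matches the {5, 15} reported in the setup row.
T = 5

layer = nn.Sequential(
    nn.Linear(32, 32),
    neuron.IFNode(surrogate_function=surrogate.ATan()),  # integrate-and-fire neuron
)

x = torch.randn(128, 32)  # assumed: 128 nodes with 32-dimensional features
spikes = torch.stack([layer(x) for _ in range(T)]).mean(0)  # firing rate over T steps
functional.reset_net(layer)  # reset membrane potentials between samples
```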
Open Source Code | Yes | "Codes are available at https://github.com/ZhenhHuang/MSG"
Open Datasets | Yes | "Our experiments are conducted on 4 commonly used benchmark datasets including two popular co-purchase graphs: Computers and Photo [53], and two co-author graphs: CS and Physics [53]. Datasets are detailed in Appendix E."
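All four benchmarks ship with PyTorch Geometric, so a plausible loading sketch looks like the following; the root paths are arbitrary, and the paper does not state which loader it uses.

```python
from torch_geometric.datasets import Amazon, Coauthor

# Co-purchase graphs
computers = Amazon(root='data/Amazon', name='Computers')
photo = Amazon(root='data/Amazon', name='Photo')

# Co-author graphs
cs = Coauthor(root='data/Coauthor', name='CS')
physics = Coauthor(root='data/Coauthor', name='Physics')

data = computers[0]  # a single Data object with x, edge_index, y
print(data.num_nodes, data.num_edges, computers.num_classes)
```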
Dataset Splits | No | The paper refers to common benchmark datasets and evaluation protocols but does not explicitly state the training/validation/test splits (e.g., percentages or sample counts) in its own text.
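Since the paper does not state its splits, a reproduction has to choose one. A common convention for these benchmarks, assumed here purely for illustration, is a random 60/20/20 node split:

```python
import torch

def random_split(num_nodes, train_frac=0.6, val_frac=0.2, seed=0):
    """Random 60/20/20 node split -- an assumed protocol, not the paper's."""
    g = torch.Generator().manual_seed(seed)
    perm = torch.randperm(num_nodes, generator=g)
    n_train = int(train_frac * num_nodes)
    n_val = int(val_frac * num_nodes)
    masks = {}
    for name, idx in [('train', perm[:n_train]),
                      ('val', perm[n_train:n_train + n_val]),
                      ('test', perm[n_train + n_val:])]:
        m = torch.zeros(num_nodes, dtype=torch.bool)
        m[idx] = True
        masks[name] = m
    return masks
```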
Hardware Specification | Yes | "Experiments are conducted on the hardware of NVIDIA GeForce RTX 4090 GPU (24 GB memory) and AMD EPYC 9654 96-core CPU."
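A quick sanity check that a comparable GPU is visible to PyTorch (device index 0 assumed):

```python
import torch

if torch.cuda.is_available():
    props = torch.cuda.get_device_properties(0)
    print(props.name, f"{props.total_memory / 2**30:.0f} GiB")  # e.g. RTX 4090, 24 GiB
else:
    print("CUDA not available; falling back to CPU")
```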
Software Dependencies | No | "Our model is built upon GeoOpt [56], SpikingJelly [56] and PyTorch [57]." The libraries are named, but no version numbers are given.
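Because no versions are pinned, a reproduction can at least record the installed ones. The package names below assume the standard PyPI distributions geoopt, spikingjelly, and torch:

```python
from importlib.metadata import version

for pkg in ("torch", "geoopt", "spikingjelly"):
    print(pkg, version(pkg))  # raises PackageNotFoundError if a package is missing
```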
Experiment Setup | Yes | "The dimension of the representation space is set as 32. The manifold spiking neuron is based on the IF model [49] by default... The time steps T for neurons is set to 5 or 15. The step size ϵ in Eq. 8 is set to 0.1. The hyperparameters are tuned with grid search, in which the learning rate is {0.01, 0.003} for node classification and {0.003, 0.001} for link prediction, and the dropout rate is in {0.1, 0.3, 0.5}."
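The stated search space translates directly into a grid. A minimal sketch, assuming each configuration is fed to the (unspecified) training loop:

```python
from itertools import product

# Search space exactly as reported: the learning rate depends on the task,
# while dropout and the time steps T are shared across tasks.
LRS = {'node_classification': [0.01, 0.003], 'link_prediction': [0.003, 0.001]}

def configs(task):
    for lr, dropout, T in product(LRS[task], [0.1, 0.3, 0.5], [5, 15]):
        # dim=32 and step size epsilon=0.1 are fixed per the setup above
        yield {'dim': 32, 'epsilon': 0.1, 'lr': lr, 'dropout': dropout, 'T': T}

for cfg in configs('node_classification'):
    print(cfg)  # plug each cfg into the (hypothetical) training loop
```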