Learnable Surrogate Gradient for Direct Training Spiking Neural Networks

Authors: Shuang Lian, Jiangrong Shen, Qianhui Liu, Ziming Wang, Rui Yan, Huajin Tang

IJCAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our work on both image datasets (CIFAR10/100) and the neuromorphic dataset, CIFAR-DVS [Li et al., 2017]. We first conduct a series of ablation experiments to verify the effectiveness of the proposed LSG method. Then we explore how the LSG method alleviates the blocking of gradient propagation during training. We finally compare our LSG method with previous methods to illustrate the superiority of our work."
Researcher Affiliation | Academia | Shuang Lian (1), Jiangrong Shen (1), Qianhui Liu (2), Ziming Wang (1), Rui Yan (3), Huajin Tang (1, 4): (1) College of Computer Science and Technology, Zhejiang University; (2) Department of Electrical and Computer Engineering, National University of Singapore; (3) College of Computer Science and Technology, Zhejiang University of Technology; (4) Zhejiang Lab
Pseudocode | Yes | "Algorithm 1: Overall training process of the SNN with LSG method in one iteration"
Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code.
Open Datasets | Yes | "We evaluate our work on both image datasets (CIFAR10/100) and the neuromorphic dataset, CIFAR-DVS [Li et al., 2017]."
Dataset Splits | No | The paper mentions the CIFAR-10/100 and CIFAR-DVS datasets but does not explicitly provide training, validation, and test splits (e.g., percentages, sample counts, or predefined split citations).
Hardware Specification | No | The paper does not provide hardware details such as GPU or CPU models, memory, or cloud computing instance types used to run the experiments.
Software Dependencies | No | The paper does not provide software dependencies with version numbers (e.g., Python, PyTorch, or CUDA versions).
Experiment Setup | Yes | "Details of the hyperparameter settings and the network structures are introduced in the Supplementary. βn is initialized to 0.2 for all layers, with the empirical experimental settings α = 1 and vth = 0.5. We conduct a set of ablation experiments to verify the effectiveness of the proposed LSG learning on CIFAR-10/100 using ResNet-19 [Zheng et al., 2021] with T = 2, and on CIFAR-DVS using VGGSNN [Deng et al., 2022] with T = 10 as backbones."
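To make the quoted hyperparameters concrete, the sketch below shows a generic surrogate-gradient spiking-neuron step using the paper's stated settings (vth = 0.5, α = 1, and a β initialized to 0.2). The sigmoid-derivative surrogate form used here is a common illustrative choice, not necessarily the paper's exact LSG function; the names `spike_forward` and `surrogate_grad` are ours, for illustration only.

```python
import math

VTH = 0.5    # firing threshold, from the paper's reported setup
ALPHA = 1.0  # surrogate sharpness, from the paper's reported setup

def spike_forward(v, vth=VTH):
    """Heaviside step used on the forward pass: spike when the
    membrane potential v crosses the threshold."""
    return 1.0 if v >= vth else 0.0

def surrogate_grad(v, beta=0.2, vth=VTH, alpha=ALPHA):
    """Smooth stand-in for the Heaviside derivative on the backward pass.

    Here: a scaled sigmoid derivative whose width is set by beta,
    treated as a per-layer learnable parameter (initialized to 0.2,
    matching the setup quoted above).
    """
    s = 1.0 / (1.0 + math.exp(-alpha * (v - vth) / beta))
    return s * (1.0 - s) * alpha / beta

# The surrogate gradient peaks at the threshold and decays away from it,
# which is what lets gradients flow through the non-differentiable spike.
g_at_threshold = surrogate_grad(0.5)   # maximal
g_far_above = surrogate_grad(2.0)      # near zero
```

In a full implementation (e.g., a custom autograd function), `spike_forward` would define the forward pass and `surrogate_grad` would replace the step function's derivative during backpropagation, with β updated by gradient descent alongside the weights.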