ESL-SNNs: An Evolutionary Structure Learning Strategy for Spiking Neural Networks

Authors: Jiangrong Shen, Qi Xu, Jian K. Liu, Yueming Wang, Gang Pan, Huajin Tang

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show that the proposed ESL-SNNs framework is able to learn SNNs with sparse structures effectively with only a limited accuracy loss. The ESL-SNNs achieve merely 0.28% accuracy loss with 10% connection density on the DVS-Cifar10 dataset.
Researcher Affiliation | Academia | Jiangrong Shen (1), Qi Xu (2*), Jian K. Liu (3), Yueming Wang (1), Gang Pan (1), Huajin Tang (1,4*); (1) College of Computer Science and Technology, Zhejiang University, China; (2) School of Artificial Intelligence, Dalian University of Technology, China; (3) School of Computing, University of Leeds, UK; (4) Research Institute of Intelligent Computing, Zhejiang Lab, China
Pseudocode | Yes | Algorithm 1: The training process of the structure learning framework of ESL-SNNs (a sketch of such a training loop is given after this table).
Open Source Code | No | The paper does not contain any explicit statement about providing source code for the described methodology, nor does it include a link to a code repository.
Open Datasets | Yes | The performance of the proposed sparse learning framework is evaluated on the MNIST, Cifar10, Cifar100, and Cifar10-DVS datasets.
Dataset Splits | Yes | The test accuracy is reported for the model checkpoint with the best top-1 validation accuracy within 300 training epochs. During training, the validation set is formed by randomly sampling 10% of the training set (a split sketch is given after this table).
Hardware Specification | Yes | The power figures for FLOPS on the GPU and SOPS on the neuromorphic chip are taken from the Titan V100 and TrueNorth, respectively.
Software Dependencies | No | The paper describes algorithms and methods (e.g., surrogate gradient, spatio-temporal backpropagation, the Euler method, the TET loss function) but does not specify any software libraries or frameworks with version numbers (e.g., Python 3.8, PyTorch 1.9, or specific solvers). A surrogate-gradient sketch is given after this table.
Experiment Setup | Yes | The architecture of the multi-layer feedforward ESL-SNNs is set to 784-800-10. During training, the learning rate decays exponentially from 0.01 to 0.0001 and the batch size is set to 100 (the implied per-epoch decay factor is worked out in a sketch after this table). Convolutional ESL-SNNs are built as VGGSNN (64C3-128C3-AP2-256C3-256C3-AP2-512C3-512C3-AP2-512C3-512C3-AP2) and ResNet19 on DVS-Cifar10 and the two Cifar datasets, following the structure in (Deng et al. 2021b). The simulation time length is set to 2 and 4 to speed up training on Cifar10 and Cifar100. T_iter is set to 1000 to control the update frequency of the weight mask. For these convolutional models, the learning rate and batch size are 0.001 and 64, respectively.
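
The Pseudocode row above refers to Algorithm 1, the training process of the structure learning framework. Since the paper releases no code, the following Python sketch only illustrates a sparse-from-scratch training loop of that general kind, under stated assumptions: the SET-style mask update (drop the weakest active connections, regrow the same number at random), the drop fraction, and the names prune_and_regrow and train_sparse are illustrative choices, not the authors' implementation.

import torch

def prune_and_regrow(weight, mask, drop_frac=0.1):
    # Hypothetical SET-style mask update: deactivate the smallest-magnitude
    # active connections, then regrow the same number at random positions.
    flat_mask = mask.view(-1)
    active = flat_mask.bool()
    n_drop = int(drop_frac * int(active.sum()))
    if n_drop == 0:
        return mask
    magnitudes = weight.detach().abs().view(-1).masked_fill(~active, float("inf"))
    drop_idx = torch.topk(magnitudes, n_drop, largest=False).indices
    flat_mask[drop_idx] = 0.0
    # Regrow at random inactive positions (a real implementation would
    # usually exclude the positions that were dropped in this same step).
    inactive_idx = (flat_mask == 0).nonzero().view(-1)
    grow_idx = inactive_idx[torch.randperm(len(inactive_idx))[:n_drop]]
    flat_mask[grow_idx] = 1.0
    return mask

def train_sparse(model, masked_params, loader, optimizer, loss_fn, t_iter=1000):
    # masked_params: list of (weight, mask) pairs; pruned weights stay at zero.
    step = 0
    for x, y in loader:
        with torch.no_grad():
            for w, m in masked_params:
                w.mul_(m)                      # enforce the current sparse structure
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
        step += 1
        if step % t_iter == 0:                 # T_iter: mask-update frequency
            for w, m in masked_params:
                prune_and_regrow(w, m)
    return model

Such loops normally keep each layer's connection density fixed at the target value throughout training (10% in the DVS-Cifar10 result quoted above), so the network is never trained densely.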
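
The Dataset Splits row states that 10% of the training samples are held out at random as the validation set. A minimal sketch with torch.utils.data.random_split looks like this; the seed and the exact splitting procedure are assumptions.

import torch
from torch.utils.data import random_split

def split_train_val(train_set, val_fraction=0.1, seed=0):
    # Hold out a random 10% of the training set for validation; the model
    # with the best top-1 validation accuracy within 300 epochs is the one
    # evaluated on the test set.
    n_val = int(len(train_set) * val_fraction)
    n_train = len(train_set) - n_val
    generator = torch.Generator().manual_seed(seed)
    return random_split(train_set, [n_train, n_val], generator=generator)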
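
The Software Dependencies row lists the techniques the paper names without any library versions. As an illustration of the first of them, a minimal PyTorch surrogate-gradient spike function could look as follows; the rectangular surrogate, the threshold of 1.0, and the window width are assumptions rather than the paper's exact choices.

import torch

class SurrogateSpike(torch.autograd.Function):
    # Heaviside spike in the forward pass, rectangular surrogate in the backward pass.

    @staticmethod
    def forward(ctx, membrane_potential, threshold=1.0, width=1.0):
        ctx.save_for_backward(membrane_potential)
        ctx.threshold, ctx.width = threshold, width
        return (membrane_potential >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Pass gradients only for membrane potentials near the firing threshold.
        surrogate = (torch.abs(u - ctx.threshold) < ctx.width / 2).float() / ctx.width
        return grad_output * surrogate, None, None

spike_fn = SurrogateSpike.apply

In use, spike_fn(u) replaces the non-differentiable thresholding so that spatio-temporal backpropagation can propagate gradients through time steps and layers.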
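
The Experiment Setup row gives only the endpoints of the learning-rate schedule. If the decay is exponential over the 300 training epochs, the per-epoch factor works out to gamma = (1e-4 / 1e-2) ** (1 / 300), roughly 0.9848. The sketch below encodes that together with the stated 784-800-10 architecture; the per-epoch schedule, the optimizer choice, and the plain Linear stand-in for the spiking layers are assumptions.

import torch

# Per-epoch factor for an exponential decay from 0.01 to 0.0001 over 300 epochs.
gamma = (1e-4 / 1e-2) ** (1 / 300)   # ~0.9848

model = torch.nn.Sequential(          # stand-in for the 784-800-10 feedforward SNN
    torch.nn.Linear(784, 800),
    torch.nn.Linear(800, 10),
)
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)   # optimizer choice assumed
scheduler = torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)

for epoch in range(300):
    # ... one training epoch over MNIST with batch size 100 ...
    scheduler.step()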