Training Spiking Neural Networks with Accumulated Spiking Flow
Authors: Hao Wu, Yueyi Zhang, Wenming Weng, Yongting Zhang, Zhiwei Xiong, Zheng-Jun Zha, Xiaoyan Sun, Feng Wu
AAAI 2021, pp. 10320-10328
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results demonstrate that with our proposed ASF-BP method, light-weight convolutional SNNs achieve superior performances compared with other spike-based BP methods on both non-neuromorphic (MNIST, CIFAR10) and neuromorphic (CIFAR10-DVS) datasets. ... Experimental results demonstrate that SNNs with ASF-BP achieve state-of-the-art performances on the three test datasets. |
| Researcher Affiliation | Academia | Hao Wu, Yueyi Zhang, Wenming Weng, Yongting Zhang, Zhiwei Xiong, Zheng-Jun Zha, Xiaoyan Sun, Feng Wu, University of Science and Technology of China, National Engineering Laboratory for Brain-inspired Intelligence Technology and Application, {wuhao,wmweng,zytabcd}@mail.ustc.edu.cn, {zhyuey,zwxiong,zhazj,sunxiaoyan,fengwu}@ustc.edu.cn |
| Pseudocode | Yes | Finally, we show the details of the ASF-BP with the pseudo-code in Algorithm 1. |
| Open Source Code | Yes | The code is available at https://github.com/neural-lab/ASF-BP. |
| Open Datasets | Yes | Two kinds of image datasets are utilized to test the ASF-BP method for image classification tasks. They are non-neuromorphic datasets MNIST (Le Cun et al. 1998), CIFAR10 (Alex Krizhevsky 2009) and neuromorphic dataset CIFAR10-DVS (Li et al. 2017). |
| Dataset Splits | No | MNIST is a handwritten digit dataset, which is a standard benchmark used to evaluate the performance of pattern recognition and machine learning algorithms. The dataset contains 70,000 grayscale images in 10 classes, of which 60,000 images are used for training and 10,000 images are for testing. ... CIFAR10 dataset is widely used in machine learning research for object classification. It contains 60,000 color images in 10 classes, of which 50,000 images are for training and 10,000 images are for testing. ... We randomly select 90% of the images as the training images and the rest of the images are the testing images for every class. (A sketch of this per-class split appears below the table.) |
| Hardware Specification | Yes | All the models are trained using one NVIDIA Titan XP Graphics card. |
| Software Dependencies | No | The simulation code is written with the PyTorch framework (Paszke et al. 2017), which provides easy interfaces for GPU acceleration and auto differentiation. |
| Experiment Setup | Yes | The threshold of neurons is tuned according to different types of networks and datasets, which is typically set between 0.5 and 2. We adopt the Adam optimizer (Kingma and Ba 2014) to adjust the learning rate with initial lr = 8.5 * 10^-4 or 5 * 10^-4 for different datasets. After dozens of epochs, we will decrease the learning rate manually. The batch size is set to 60, 100, 40 for the MNIST, CIFAR10 and CIFAR10-DVS datasets respectively. The scale factor used in the backward process is updated every 5 epochs. (A sketch of this optimizer configuration appears below the table.) |
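The Dataset Splits row quotes a per-class 90/10 random split for CIFAR10-DVS, but the paper does not release split indices. Below is a minimal sketch of that protocol; the `split_per_class` helper, the `(sample, label)` input format, and the fixed seed are assumptions for illustration, not part of the released ASF-BP code.

```python
import random
from collections import defaultdict

def split_per_class(samples, train_ratio=0.9, seed=0):
    """Randomly split (sample, label) pairs into train/test sets per class."""
    by_class = defaultdict(list)
    for sample, label in samples:
        by_class[label].append(sample)

    rng = random.Random(seed)  # fixed seed so the split is reproducible
    train, test = [], []
    for label, items in by_class.items():
        rng.shuffle(items)
        cut = int(len(items) * train_ratio)  # 90% of each class to training
        train.extend((s, label) for s in items[:cut])
        test.extend((s, label) for s in items[cut:])
    return train, test

# With 1,000 recordings per CIFAR10-DVS class, this yields 900 training
# and 100 testing samples per class.
dummy = [(i, i % 10) for i in range(10_000)]
train_set, test_set = split_per_class(dummy)
assert len(train_set) == 9_000 and len(test_set) == 1_000
```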
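The Experiment Setup row lists Adam with an initial learning rate of 8.5e-4 or 5e-4, dataset-specific batch sizes, and a manual learning-rate decrease "after dozens of epochs". A minimal PyTorch sketch of that configuration follows; the placeholder model, the decay epochs, and the 10x decay factor are assumptions, since the quoted setup does not specify them.

```python
import torch
import torch.nn as nn

# Placeholder model: the actual light-weight convolutional SNN is defined
# in the released ASF-BP repository; this stand-in only illustrates the
# optimizer configuration quoted above.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 10))

# Adam with the quoted CIFAR10 settings: initial lr = 8.5e-4,
# batch size 100 (60 for MNIST, 40 for CIFAR10-DVS).
optimizer = torch.optim.Adam(model.parameters(), lr=8.5e-4)
batch_size = 100

for epoch in range(100):
    # ... one training pass over the CIFAR10 loader would go here ...

    # "After dozens of epochs, we will decrease the learning rate
    # manually": the exact schedule is unspecified, so these decay
    # epochs and the 10x factor are assumptions.
    if epoch in (50, 80):
        for group in optimizer.param_groups:
            group["lr"] *= 0.1
```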