Direct Training for Spiking Neural Networks: Faster, Larger, Better

Authors: Yujie Wu, Lei Deng, Guoqi Li, Jun Zhu, Yuan Xie, Luping Shi

AAAI 2019, pp. 1311–1318 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test the proposed model and learning algorithm on both neuromorphic datasets (N-MNIST and DVS-CIFAR10) and non-spiking datasets (CIFAR10) from two aspects: (1) training acceleration; (2) application accuracy. As a result, we achieve significantly better accuracy than the reported works on neuromorphic datasets (N-MNIST and DVS-CIFAR10), and comparable accuracy to existing ANNs and pre-trained SNNs on non-spiking datasets (CIFAR10).
Researcher Affiliation | Academia | Yujie Wu (1), Lei Deng (2), Guoqi Li (1), Jun Zhu (3), Yuan Xie (2), Luping Shi (1). (1) Center for Brain-Inspired Computing Research, Department of Precision Instrument, Tsinghua University; (2) Department of Electrical and Computer Engineering, University of California, Santa Barbara; (3) Department of Computer Science and Technology, Institute for AI, THBI Lab, Tsinghua University. Corresponding: lpshi@tsinghua.edu.cn; dcszj@mail.tsinghua.edu.cn
Pseudocode | Yes | Algorithm 1: State update for an explicitly iterative LIF neuron at time step t + 1 in the (n + 1)-th layer. Algorithm 2: Training codes for one iteration.
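The paper's Algorithm 1 specifies a state update of an explicitly iterative LIF neuron that can be unrolled over time steps and layers. The sketch below illustrates what such an update looks like in PyTorch; the threshold, decay factor, and rectangular surrogate gradient are illustrative assumptions, not the authors' released code.

```python
import torch

# Illustrative constants; the paper's exact parameter settings are given in its Appendix.
V_TH = 0.5    # firing threshold (assumed value)
K_TAU = 0.2   # membrane decay factor (assumed value)
A = 1.0       # width of the rectangular surrogate-gradient window (assumed value)


class SpikeFn(torch.autograd.Function):
    """Heaviside spike with a rectangular surrogate gradient (a common STBP-style choice)."""

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u >= V_TH).float()

    @staticmethod
    def backward(ctx, grad_output):
        (u,) = ctx.saved_tensors
        # Pass gradients only for membrane potentials near the threshold.
        surrogate = (torch.abs(u - V_TH) < A / 2).float() / A
        return grad_output * surrogate


def lif_update(u_prev, o_prev, weighted_input):
    """One explicitly iterative LIF step (sketch of Algorithm 1):
    u[t+1] = k_tau * u[t] * (1 - o[t]) + weighted input;  o[t+1] = H(u[t+1] - V_th)."""
    u = K_TAU * u_prev * (1.0 - o_prev) + weighted_input
    o = SpikeFn.apply(u)
    return u, o
```

A training iteration in the spirit of Algorithm 2 would then unroll `lif_update` over all time steps and layers and backpropagate through the surrogate gradient.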
Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link, an explicit code-release statement, or code in supplementary materials) for the source code of the described methodology.
Open Datasets | Yes | We test the proposed model and learning algorithm on both neuromorphic datasets (N-MNIST and DVS-CIFAR10) and non-spiking datasets (CIFAR10) from two aspects: (1) training acceleration; (2) application accuracy. The dataset introduction, pre-processing, training detail, and parameter configuration are summarized in Appendix.
Dataset Splits | No | The paper mentions training and testing on the different datasets (N-MNIST, DVS-CIFAR10, CIFAR10) but does not give specific training/validation/test splits, such as exact percentages or sample counts.
Hardware Specification | No | The paper mentions using PyTorch for implementation but does not specify any hardware details (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper mentions using PyTorch for implementation and compares with MATLAB, but it does not specify version numbers for these or any other software dependencies.
Experiment Setup | Yes | For fairness, we made several configuration restrictions, such as software version, parameter setting, etc. More details can be found in Appendix. Fig. 3 shows the comparisons about average runtime per epoch, where a batch size of 20 is used for simulation.
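The runtime comparison in Fig. 3 is an average per-epoch wall-clock measurement with batch size 20. A minimal sketch of such a measurement is shown below; the model, loader, optimizer, and loss function are placeholders, not details taken from the paper.

```python
import time


def epoch_runtime(model, loader, optimizer, loss_fn, device="cuda"):
    """Measure wall-clock time for one training epoch (illustrative only)."""
    model.train()
    start = time.perf_counter()
    for x, y in loader:  # loader would use batch_size=20, as in the paper's Fig. 3 setup
        x, y = x.to(device), y.to(device)
        optimizer.zero_grad()
        loss = loss_fn(model(x), y)
        loss.backward()
        optimizer.step()
    return time.perf_counter() - start
```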