Spike-Train Level Backpropagation for Training Deep Recurrent Spiking Neural Networks

Authors: Wenrui Zhang, Peng Li

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | From Section 4, "Experiments and Results": All reported experiments below are conducted on an NVIDIA Titan XP GPU. The experimented SNNs are based on the LIF model and weights are randomly initialized by following the uniform distribution U[-1, 1]. Fixed firing thresholds are used in the range of 5 mV to 20 mV depending on the layer. Exponential weight regularization [23], lateral inhibition in the output layer [23], and Adam [20] as the optimizer are adopted. Parameters such as the desired output firing counts, thresholds, and learning rates are empirically tuned.
Researcher Affiliation | Academia | Wenrui Zhang, University of California, Santa Barbara, Santa Barbara, CA 93106, wenruizhang@ucsb.edu; Peng Li, University of California, Santa Barbara, Santa Barbara, CA 93106, lip@ucsb.edu
Pseudocode | Yes | The complete ST-RSBP algorithm is summarized in Section 2.4 of the Supplementary Materials.
Open Source Code | No | By releasing the GPU implementation code, we expect this work would advance the research on spiking neural networks and neuromorphic computing. (The quote expresses an intent to release; no repository link is given in the paper.)
Open Datasets | Yes | Based upon challenging speech and image datasets including TI46 [25], N-TIDIGITS [3], Fashion-MNIST [40] and MNIST, ST-RSBP is able to train SNNs with an accuracy surpassing that of the current state-of-the-art SNN BP algorithms and conventional non-spiking deep learning models.
Dataset Splits | No | For TI46-Alpha: There are 4,142 and 6,628 spoken English examples in 26 classes for training and testing, respectively. For N-Tidigits: 2,475 single digit examples are used for training and the same number of examples are used for testing. The paper specifies training and testing splits but does not mention an explicit validation split.
Hardware Specification | Yes | All reported experiments below are conducted on an NVIDIA Titan XP GPU.
Software Dependencies | No | The paper mentions 'Adam [20] as the optimizer' and 'Keras BPc', but does not provide specific version numbers for these or other software libraries, programming languages, or frameworks used in the experiments.
Experiment Setup | Yes | All reported experiments below are conducted on an NVIDIA Titan XP GPU. The experimented SNNs are based on the LIF model and weights are randomly initialized by following the uniform distribution U[-1, 1]. Fixed firing thresholds are used in the range of 5 mV to 20 mV depending on the layer. Exponential weight regularization [23], lateral inhibition in the output layer [23], and Adam [20] as the optimizer are adopted. Parameters such as the desired output firing counts, thresholds, and learning rates are empirically tuned. Table 1 lists the typical constant values adopted in the proposed ST-RSBP learning rule in our experiments. The simulation step size is set to 1 ms. The batch size is 1, which means ST-RSBP is applied after each training sample to update the weights.
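To make the reported setup concrete, the sketch below shows a discrete-time LIF layer with U[-1, 1] weight initialization, a fixed firing threshold in the 5-20 mV range, and a 1 ms simulation step, as quoted above. This is a minimal illustration under stated assumptions, not the authors' ST-RSBP implementation; the membrane time constant, reset-to-zero behavior, and layer sizes are assumptions not given in the quoted setup.

```python
import numpy as np

DT_MS = 1.0      # simulation step size of 1 ms, as stated in the paper
TAU_M_MS = 20.0  # membrane time constant (assumed; not specified above)

def init_weights(n_in, n_out, rng):
    """Uniform U[-1, 1] weight initialization, as described in the setup."""
    return rng.uniform(-1.0, 1.0, size=(n_in, n_out))

def lif_layer(spikes_in, weights, threshold_mv):
    """Run one LIF layer over a [T, n_in] binary spike train.

    The membrane potential leaks exponentially, integrates weighted input
    spikes each 1 ms step, fires when it reaches the fixed threshold, and
    resets to zero after firing (reset rule is an assumption here).
    Returns the [T, n_out] output spike train.
    """
    T = spikes_in.shape[0]
    n_out = weights.shape[1]
    v = np.zeros(n_out)
    decay = np.exp(-DT_MS / TAU_M_MS)
    spikes_out = np.zeros((T, n_out))
    for t in range(T):
        v = decay * v + spikes_in[t] @ weights
        fired = v >= threshold_mv
        spikes_out[t] = fired
        v[fired] = 0.0  # reset fired neurons
    return spikes_out

rng = np.random.default_rng(0)
W = init_weights(40, 10, rng)                      # illustrative layer sizes
inp = (rng.random((100, 40)) < 0.1).astype(float)  # 100 ms of random input spikes
out = lif_layer(inp, W, threshold_mv=10.0)         # threshold within 5-20 mV range
print(out.shape)
```

Batch size 1 would correspond to running this forward pass on a single training sample and applying a weight update immediately afterward, before the next sample is presented.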