SLAYER: Spike Layer Error Reassignment in Time
Authors: Sumit Bam Shrestha, Garrick Orchard
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate SLAYER achieving state of the art accuracy for an SNN on neuromorphic datasets (Section 4) for visual digit recognition, action recognition, and spoken digit recognition. In this Section we will present different experiments conducted and results on them to evaluate the performance of SLAYER. First, we train an SNN to produce a fixed Poisson spike train pattern in response to a given set of Poisson spike inputs. We use this simple example to show how SLAYER works. Afterwards we present results of classification tasks performed on both spiking datasets and non-spiking datasets converted to spikes. |
| Researcher Affiliation | Academia | Sumit Bam Shrestha Temasek Laboratories @ NUS National University of Singapore Singapore, 117411 tslsbs@nus.edu.sg Garrick Orchard Temasek Laboratories @ NUS National University of Singapore Singapore, 117411 tslgmo@nus.edu.sg |
| Pseudocode | No | The paper describes mathematical formulations and a "pipeline" for SLAYER, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | We have developed and released3 a CUDA accelerated framework to train SNNs using SLAYER. 3 The code for SLAYER learning framework is publicly available at: https://bitbucket.org/bamsumit/slayer |
| Open Datasets | Yes | MNIST is a popular machine learning dataset. ... The NMNIST dataset [36] consists of MNIST images converted into a spiking dataset... The DVS Gesture [32] dataset consists of recordings... TIDIGITS [37] is an audio classification dataset... |
| Dataset Splits | No | Standard split of 60,000 training samples and 10,000 testing samples was used with no data augmentation. The training and testing separation is the same as the standard MNIST split of 60,000 training samples and 10,000 testing samples. Samples from the first 23 subjects were used for training and last 6 subjects were used for testing. The dataset was split into 3950 training samples and 1000 testing samples. No explicit mention of a validation split. |
| Hardware Specification | No | We use our CUDA accelerated SNN deep learning framework for SLAYER to perform all the simulations for which results are presented in this paper. No specific GPU model or other hardware specifications are provided. |
| Software Dependencies | No | The paper mentions a 'CUDA accelerated SNN deep learning framework' but does not specify any software names with version numbers for reproducibility. |
| Experiment Setup | Yes | During training, we specify a target of 20 spikes for the true neuron and 5 spikes for each false neuron over the 25 ms period. In our experiments, we use spike response kernels of the form ε(t) = (t/τs) exp(1 − t/τs) Θ(t) and ν(t) = −2ϑ exp(1 − t/τr) Θ(t). The learning finally converges to the desired spike train at the 739th epoch. For NMNIST training, we use a target of 10 spikes for each false class neuron and 60 spikes for the true class neuron. For training we set a target spike count of 30 for false class neurons and 180 for the true class neuron. For training, we specify a target of 5 spikes for false classes and 20 spikes for the true class. |
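To make the quoted kernel definitions in the Experiment Setup row concrete, below is a minimal numerical sketch of the spike response kernel ε(t) and refractory kernel ν(t), and of convolving a Poisson input spike train with ε(t) over a 25 ms window. The time constants `tau_s`, `tau_r`, the threshold `theta`, the 1 ms resolution, and the spike probability are illustrative placeholders, not values taken from the paper or from the released SLAYER code.

```python
import numpy as np

# Sketch of the kernels quoted above, with assumed (not paper-specified) constants.

def heaviside(t):
    """Theta(t): 1 for t >= 0, else 0."""
    return (t >= 0).astype(float)

def epsilon(t, tau_s=1.0):
    """Spike response kernel: eps(t) = (t / tau_s) * exp(1 - t / tau_s) * Theta(t)."""
    return (t / tau_s) * np.exp(1.0 - t / tau_s) * heaviside(t)

def nu(t, theta=10.0, tau_r=1.0):
    """Refractory kernel: nu(t) = -2 * theta * exp(1 - t / tau_r) * Theta(t)."""
    return -2.0 * theta * np.exp(1.0 - t / tau_r) * heaviside(t)

# Example: membrane contribution of one synapse driven by a Poisson spike train,
# over the 25 ms simulation window mentioned in the first experiment.
dt = 1.0                                  # ms per time step (assumed)
t = np.arange(0.0, 25.0, dt)
rng = np.random.default_rng(0)
spikes = (rng.random(t.size) < 0.2).astype(float)   # ~0.2 spikes per ms (assumed rate)
psp = np.convolve(spikes, epsilon(t), mode="full")[: t.size] * dt
print(psp.round(3))
```

In the paper's setup, the per-class target spike counts quoted in the same row (e.g. 60 true / 10 false for NMNIST) define the desired output over such a window; the sketch above only illustrates the kernel shapes, not the SLAYER credit-assignment rule itself.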