Unifying Activation- and Timing-based Learning Rules for Spiking Neural Networks

Authors: Jinseok Kim, Kyungsu Kim, Jae-Joon Kim

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results showed that the proposed method achieves higher performance than previous approaches in terms of both accuracy and efficiency. In experiments with a random spike-train matching task and widely used benchmarks (MNIST and N-MNIST), our method achieved higher accuracy than existing methods when the networks were forced to use fewer spikes during training.
Researcher Affiliation | Academia | Jinseok Kim (1), Kyungsu Kim (1), Jae-Joon Kim (1,2); (1) Department of Creative IT Engineering, (2) Graduate School of Artificial Intelligence, Pohang University of Science and Technology (POSTECH), Korea; {jinseok.kim, kyungsu.kim, jaejoon}@postech.ac.kr
Pseudocode | No | The paper contains mathematical formulations and diagrams but does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | 'The source code is available at https://github.com/KyungsuKim42/ANTLR.'
Open Datasets | Yes | Latency-coded MNIST [26] and a spiking version of MNIST, called N-MNIST [27]. The references [26] and [27] provide proper citations: '[26] Y. LeCun, C. Cortes, and C. J. Burges, The MNIST database of handwritten digits, 1998. URL http://yann.lecun.com/exdb/mnist.' and '[27] G. Orchard, A. Jayawant, G. K. Cohen, and N. Thakor, Converting static image datasets to spiking neuromorphic datasets using saccades, Frontiers in Neuroscience, vol. 9, p. 437, 2015.'
Dataset Splits | Yes | 'We trained the network with a size of 784-800-10 and 100 time steps using a mini-batch size of 16' and a 50000/10000 split of the images for the training/validation datasets.
Hardware Specification | No | The paper mentions 'CUDA-compatible gradient computation functions' but does not specify any particular hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies | No | The paper states that it 'implemented CUDA-compatible gradient computation functions in PyTorch [23]', but it does not give explicit version numbers for PyTorch, CUDA, or any other software dependencies.
Experiment Setup | Yes | 'We trained the network with a size of 784-800-10 and 100 time steps using a mini-batch size of 16' (MNIST) and 'We trained the network with a size of 2x34x34-800-10 using a mini-batch size of 16' (N-MNIST).
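
Taken together, the Dataset Splits and Experiment Setup rows pin down the reported hyperparameters: a 784-800-10 network for MNIST (2x34x34-800-10 for N-MNIST), 100 time steps, a mini-batch size of 16, and a 50000/10000 train/validation split. The sketch below is a minimal, hypothetical PyTorch scaffold that only mirrors those numbers; the torchvision loader, the fixed seed, and the plain ReLU layers are assumptions on my part, and it does not implement the paper's spiking dynamics or the proposed ANTLR learning rule (the authors' repository at https://github.com/KyungsuKim42/ANTLR contains the actual implementation).

```python
# Hypothetical scaffold reflecting the reported setup only (784-800-10 network,
# mini-batch size 16, 50000/10000 train/validation split of MNIST). It does NOT
# implement the paper's ANTLR learning rule or any spiking dynamics.
import torch
from torch import nn
from torch.utils.data import DataLoader, random_split
from torchvision import datasets, transforms

BATCH_SIZE = 16    # reported mini-batch size
HIDDEN = 800       # reported hidden-layer width (784-800-10)
TIME_STEPS = 100   # reported simulation length; unused in this non-spiking stub

full_train = datasets.MNIST(root="data", train=True, download=True,
                            transform=transforms.ToTensor())

# 50000/10000 split for training/validation, as stated in the paper
# (the seed here is an arbitrary choice for reproducibility of the split).
train_set, val_set = random_split(full_train, [50000, 10000],
                                  generator=torch.Generator().manual_seed(0))
train_loader = DataLoader(train_set, batch_size=BATCH_SIZE, shuffle=True)
val_loader = DataLoader(val_set, batch_size=BATCH_SIZE)

# Placeholder fully connected 784-800-10 network; the actual model uses
# latency-coded inputs and spiking neurons trained with the unified
# activation- and timing-based rule described in the paper.
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(784, HIDDEN), nn.ReLU(),
                      nn.Linear(HIDDEN, 10))
```

For the N-MNIST experiment the input layer would be sized 2x34x34 = 2312 instead of 784, matching the quoted 2x34x34-800-10 architecture; everything else in the sketch stays the same.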