Sparse Spiking Gradient Descent

Authors: Nicolas Perez-Nieves, Dan Goodman

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show the effectiveness of our method on real datasets of varying complexity (Fashion-MNIST, Neuromorphic MNIST and Spiking Heidelberg Digits), achieving a speedup in the backward pass of up to 150x while being 85% more memory efficient, without losing accuracy.
Researcher Affiliation | Academia | Nicolas Perez-Nieves, Electrical and Electronic Engineering, Imperial College London, London, United Kingdom (nicolas.perez14@imperial.ac.uk); Dan F.M. Goodman, Electrical and Electronic Engineering, Imperial College London, London, United Kingdom (d.goodman@imperial.ac.uk)
Pseudocode | No | No explicit pseudocode or algorithm blocks found.
Open Source Code | No | No explicit statement or link providing open-source code for the methodology described in this paper.
Open Datasets | Yes | Fashion-MNIST dataset (F-MNIST) [44], Neuromorphic MNIST (N-MNIST) dataset [45]... Spiking Heidelberg Dataset (SHD) [46]
Dataset Splits | No | No explicit details on train/validation/test splits are provided in the main text. It refers to Appendix E for training details, which is not available.
Hardware Specification | Yes | Figure 4 was obtained from running on an RTX6000 GPU. We also ran this on smaller GPUs (GTX1060 and GTX1080Ti).
Software Dependencies | No | The paper mentions a 'Pytorch CUDA extension' but does not specify version numbers for PyTorch or CUDA, nor any other software dependencies with versions.
Experiment Setup | No | The paper mentions a 'three-layer fully connected network' and the surrogate gradient function g(V) := 1/(β|V − V_th| + 1)². However, it states 'See Appendix E for all training details,' and Appendix E is not provided in the paper, so complete experimental setup details such as specific hyperparameters are missing from the main text.
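
For context on the surrogate gradient quoted in the Experiment Setup row, the sketch below shows how such a function is typically wired into PyTorch as a custom autograd function: a Heaviside spike in the forward pass and g(V) = 1/(β|V − V_th| + 1)² in the backward pass. This is a minimal illustrative sketch, not the authors' CUDA extension; the class name SurrogateSpike and the default values for β and V_th are assumptions.

import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward, surrogate derivative backward."""

    @staticmethod
    def forward(ctx, v, v_th=1.0, beta=10.0):
        ctx.save_for_backward(v)
        ctx.v_th = v_th
        ctx.beta = beta
        # Spike whenever the membrane potential reaches threshold.
        return (v >= v_th).to(v.dtype)

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Surrogate gradient g(V) = 1 / (beta * |V - V_th| + 1)^2
        g = 1.0 / (ctx.beta * (v - ctx.v_th).abs() + 1.0) ** 2
        # No gradients for the hyperparameters v_th and beta.
        return grad_output * g, None, None

if __name__ == "__main__":
    v = torch.randn(8, requires_grad=True)   # toy membrane potentials
    spikes = SurrogateSpike.apply(v)
    spikes.sum().backward()                  # gradients flow through the surrogate
    print(v.grad)

The dense formulation above is for illustration only; the backward-pass speedup reported in the paper comes from its sparse PyTorch CUDA extension rather than this generic autograd implementation.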