Hybrid Macro/Micro Level Backpropagation for Training Deep Spiking Neural Networks
Authors: Yingyezhe Jin, Wenrui Zhang, Peng Li
NeurIPS 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed HM2-BP algorithm by training deep fully connected and convolutional SNNs based on the static MNIST [14] and dynamic neuromorphic N-MNIST [26]. HM2-BP achieves an accuracy level of 99.49% and 98.88% for MNIST and N-MNIST, respectively, outperforming the best reported performances obtained from the existing SNN BP algorithms. |
| Researcher Affiliation | Academia | Yingyezhe Jin, Texas A&M University, College Station, TX 77843, jyyz@tamu.edu; Wenrui Zhang, Texas A&M University, College Station, TX 77843, zhangwenrui@tamu.edu; Peng Li, Texas A&M University, College Station, TX 77843, pli@tamu.edu |
| Pseudocode | No | The paper describes the algorithm using mathematical equations and prose, but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | We have made our CUDA implementation available online1, the first publicly available high-speed GPU framework for direct training of deep SNNs. 1https://github.com/jinyyy666/mm-bp-snn |
| Open Datasets | Yes | The MNIST handwritten digit dataset [14] consists of 60k samples for training and 10k for testing... The N-MNIST dataset [26] is a neuromorphic version of the MNIST dataset... The Extended MNIST-Balanced (EMNIST) [3] dataset, which includes both letters and digits, is more challenging than MNIST. EMNIST has 112,800 training and 18,800 testing samples for 47 classes... We also use the 16-speaker spoken English letters of TI46 Speech corpus [16]... There are 4,142 and 6,628 spoken English letters for training and testing, respectively. (A dataset-loading sketch follows the table.) |
| Dataset Splits | Yes | The MNIST handwritten digit dataset [14] consists of 60k samples for training and 10k for testing... EMNIST has 112,800 training and 18,800 testing samples... There are 4,142 and 6,628 spoken English letters for training and testing, respectively. |
| Hardware Specification | No | The paper mentions "High Performance Research Computing (HPRC) at Texas A&M University for providing computing support" in the acknowledgments, but does not provide specific hardware details such as GPU/CPU models, processor types, or memory amounts used for running experiments. |
| Software Dependencies | No | The paper states, "We use Adam [12] as the optimizer and its parameters are set according to the original Adam paper," but it does not name or give version numbers for any software libraries, frameworks, or programming languages used in the implementation beyond the custom CUDA codebase. |
| Experiment Setup | Yes | The weights of the experimented SNNs are randomly initialized by using the uniform distribution U[−a, a], where a is 1 for fully connected layers and 0.5 for convolutional layers. We use fixed firing thresholds in the range of 5 to 20 depending on the layer. We adopt the exponential weight regularization scheme in [15] and introduce the lateral inhibition in the output layer to speed up training convergence [15]... We use Adam [12] as the optimizer and its parameters are set according to the original Adam paper. We impose greater sample weights for incorrectly recognized data points during the training... We train each network for 200 epochs except for ones used for EMNIST, where we use 50 training epochs. (A configuration sketch follows the table.) |
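
Of the four datasets cited in the Open Datasets row, the two static ones are reachable through standard tooling. The following is a minimal sketch, assuming torchvision (not part of the authors' pipeline), that fetches MNIST and EMNIST-Balanced with the train/test splits reported in the paper; N-MNIST and TI46 are not packaged in torchvision and must be obtained separately.

```python
from torchvision import datasets

# MNIST [14]: 60k training / 10k test samples (official split).
mnist_train = datasets.MNIST(root="./data", train=True, download=True)
mnist_test = datasets.MNIST(root="./data", train=False, download=True)

# EMNIST-Balanced [3]: 112,800 training / 18,800 test samples, 47 classes.
emnist_train = datasets.EMNIST(root="./data", split="balanced", train=True, download=True)
emnist_test = datasets.EMNIST(root="./data", split="balanced", train=False, download=True)

print(len(mnist_train), len(mnist_test))    # 60000 10000
print(len(emnist_train), len(emnist_test))  # 112800 18800

# N-MNIST [26] (event streams) and the TI46 speech corpus [16] require
# separate downloads and spike encoding before they can be used here.
```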
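
The Experiment Setup row likewise translates into a short configuration sketch. Below is a minimal PyTorch-style illustration, not the authors' CUDA implementation: the layer sizes, the per-layer threshold values, and the 2x misclassification reweighting factor are placeholder assumptions, while the U[−a, a] initialization and the Adam defaults (lr = 0.001, β₁ = 0.9, β₂ = 0.999, ε = 1e−8) follow the paper's description.

```python
import torch
import torch.nn as nn

# Illustrative layers (sizes are assumptions; the paper trains several
# fully connected and convolutional SNN architectures).
fc = nn.Linear(784, 800)
conv = nn.Conv2d(1, 16, kernel_size=5)

# Uniform initialization U[-a, a]: a = 1 for fully connected layers,
# a = 0.5 for convolutional layers, as reported in the paper.
nn.init.uniform_(fc.weight, -1.0, 1.0)
nn.init.uniform_(conv.weight, -0.5, 0.5)

# Fixed firing thresholds in the range 5-20, set per layer; the exact
# per-layer values are not given in the paper, so these are placeholders.
thresholds = {"hidden": 10.0, "output": 5.0}

# Adam with the defaults of the original Adam paper [12].
params = list(fc.parameters()) + list(conv.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3, betas=(0.9, 0.999), eps=1e-8)

# Greater sample weights for misrecognized training points; the paper
# does not state the exact rule, so the 2x boost is an assumption.
sample_w = torch.ones(60_000)
def reweight(indices: torch.Tensor, correct: torch.Tensor) -> None:
    sample_w[indices[~correct]] *= 2.0
```

A full run would wrap this in the paper's training schedule of 200 epochs (50 for EMNIST), with the per-sample weights scaling each example's contribution to the loss.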