Advancing Training Efficiency of Deep Spiking Neural Networks through Rate-based Backpropagation
Authors: Chengting Yu, Lei Liu, Gaoang Wang, Erping Li, Aili Wang
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Comprehensive experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS validate that our method achieves comparable performance to BPTT counterparts and surpasses state-of-the-art efficient training techniques. |
| Researcher Affiliation | Academia | ¹College of Information Science and Electronic Engineering, Zhejiang University; ²ZJU-UIUC Institute, Zhejiang University |
| Pseudocode | Yes | The pseudocode for rate-based backpropagation, illustrating the implementations for both rate-M and rate-S, is provided in Algorithm 1. Algorithm 1: Single Training Iteration of the Rate-based Backpropagation (an illustrative sketch of the general idea follows the table) |
| Open Source Code | Yes | Our code is available at https://github.com/Tab-ct/rate-based-backpropagation. |
| Open Datasets | Yes | In this section, we conduct experiments on CIFAR-10 [37], CIFAR-100 [37], ImageNet [11], and CIFAR10-DVS [41] to evaluate the proposed training method. |
| Dataset Splits | Yes | CIFAR-10 includes 60,000 images across 10 classes, with 50,000 for training and 10,000 for testing, whereas CIFAR-100 is spread over 100 classes. [...] The ImageNet-1K dataset [11] comprises 1,281,167 training images and 50,000 validation images distributed across 1,000 classes (a hypothetical loading sketch follows the table) |
| Hardware Specification | Yes | The experiments on CIFAR-10, CIFAR-100, and CIFAR10-DVS datasets run on one NVIDIA GeForce RTX 3090 GPU. For ImageNet, distributed data parallel processing is utilized across eight NVIDIA GeForce RTX 4090 GPUs. |
| Software Dependencies | No | We implement SNNs training on the PyTorch [53] and SpikingJelly [19] frameworks. (No version numbers provided for PyTorch or SpikingJelly.) |
| Experiment Setup | Yes | We set Vth = 1, λ = 0.2, and employ the sigmoid-based surrogate function [19] for LIF neurons. Detailed setups are provided in Appendix C. [...] Table 3: Training hyperparameters (CIFAR-10 / CIFAR-100 / ImageNet / CIFAR10-DVS): epochs 300/300/100/300; learning rate 0.1/0.1/0.2/0.1; batch size 128/128/512/128; weight decay 5e-4/5e-4/2e-5/5e-4 (hedged sketches of the neuron model and hyperparameters follow the table) |
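For the Experiment Setup row: below is a minimal sketch of a LIF neuron with a sigmoid-based surrogate gradient, assuming Vth = 1 and treating λ = 0.2 as the membrane decay factor (the excerpt does not state λ's role, so that reading is an assumption). The surrogate sharpness `alpha` and the names `SigmoidSurrogate` / `lif_step` are hypothetical, not taken from the paper's code.

```python
import torch

class SigmoidSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; sigmoid derivative in the backward."""
    alpha = 4.0  # surrogate sharpness -- hypothetical value, not from the paper

    @staticmethod
    def forward(ctx, v_minus_th):
        ctx.save_for_backward(v_minus_th)
        return (v_minus_th >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_th,) = ctx.saved_tensors
        sig = torch.sigmoid(SigmoidSurrogate.alpha * v_minus_th)
        return grad_output * SigmoidSurrogate.alpha * sig * (1.0 - sig)

def lif_step(current, v, v_th=1.0, lam=0.2):
    """One LIF timestep: leaky integration, threshold spike, hard reset."""
    v = lam * v + current                      # leaky integration (lam = decay, assumed)
    spike = SigmoidSurrogate.apply(v - v_th)   # Heaviside forward, sigmoid-surrogate backward
    v = v * (1.0 - spike)                      # hard reset where a spike fired
    return spike, v
```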
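For the Pseudocode row: the excerpt names Algorithm 1 but does not reproduce its steps, so the following is only an illustrative sketch of the general rate-based idea (simulate the spiking dynamics over T steps, then take a single backward pass through time-averaged rates), not the paper's exact rate-M/rate-S rule. It reuses `lif_step` and `SigmoidSurrogate` from the sketch above; the straight-through re-attachment of the true firing rate to a surrogate at the averaged input current is an assumption made for illustration.

```python
import torch
import torch.nn.functional as F

def rate_based_iteration(fc1, fc2, x_seq, target, optimizer, v_th=1.0):
    """One training iteration; x_seq has shape [T, batch, features]."""
    T = x_seq.shape[0]

    # Stage 1: run the LIF dynamics WITHOUT building a per-timestep graph --
    # this is what avoids BPTT's O(T) activation memory.
    with torch.no_grad():
        v = torch.zeros_like(fc1(x_seq[0]))
        rate = torch.zeros_like(v)
        for t in range(T):
            spike, v = lif_step(fc1(x_seq[t]), v, v_th=v_th)
            rate += spike / T                  # time-averaged firing rate

    # Stage 2: one backward pass at the rate level. The forward value of `r`
    # equals the true firing rate; its gradient flows through the surrogate
    # evaluated at the time-averaged input current (a simplification).
    h = fc1(x_seq.mean(dim=0))
    s_hat = SigmoidSurrogate.apply(h - v_th)
    r = rate + (s_hat - s_hat.detach())        # straight-through re-attachment

    loss = F.cross_entropy(fc2(r), target)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

With, say, `fc1 = torch.nn.Linear(784, 256)` and `fc2 = torch.nn.Linear(256, 10)`, the backward cost of this sketch is independent of T, which is the efficiency advantage over BPTT that the paper targets.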
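For the Dataset Splits and Experiment Setup rows: a hypothetical CIFAR-10 pipeline wiring up the Table 3 values (batch size 128, learning rate 0.1, weight decay 5e-4, 300 epochs). The torchvision transforms, SGD momentum, and cosine schedule are common defaults assumed here; the excerpt confirms only the split sizes and the Table 3 numbers.

```python
import torch
from torch.utils.data import DataLoader
from torchvision import datasets, transforms

# CIFAR-10 split as described: 50,000 training / 10,000 test images.
train_tf = transforms.Compose([
    transforms.RandomCrop(32, padding=4),      # assumed augmentation, not from the paper
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
])
train_set = datasets.CIFAR10("./data", train=True, download=True, transform=train_tf)
test_set = datasets.CIFAR10("./data", train=False, download=True, transform=transforms.ToTensor())

train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)   # batch 128 (Table 3)
test_loader = DataLoader(test_set, batch_size=128, shuffle=False, num_workers=4)

model = torch.nn.Linear(3 * 32 * 32, 10)       # placeholder for the SNN backbone
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                            weight_decay=5e-4)  # lr and weight decay from Table 3
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)  # 300 epochs (schedule assumed)
```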