Rethinking the Membrane Dynamics and Optimization Objectives of Spiking Neural Networks

Authors: Hangchi Shen, Qian Zheng, Huamin Wang, Gang Pan

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our methods improve the accuracy by 4.05% on ImageNet compared to baseline and achieve state-of-the-art performance of 87.80% on CIFAR10-DVS and 87.86% on N-Caltech101. Section 5 (Experiments): In this section, we demonstrate the effectiveness of our proposed method by extensive experiments. We compare the results of our method with other methods on both the neuromorphic dataset and the static dataset. Additional training procedures and other hyperparameter settings are provided in Appendix A.
Researcher Affiliation | Academia | Hangchi Shen1,2, Qian Zheng3,4, Huamin Wang1,2, Gang Pan3,4. 1College of Artificial Intelligence, Southwest University; 2Chongqing Key Laboratory of Brain Inspired Computing and Intelligent Chips; 3The State Key Lab of Brain-Machine Intelligence, Zhejiang University; 4College of Computer Science and Technology, Zhejiang University
Pseudocode | No | The paper describes its method in equations within the text but does not include a formal pseudocode or algorithm block. (An illustrative LIF sketch follows the table below.)
Open Source Code | Yes | The code is available at https://github.com/StephenTaylor1998/IMP-SNN.
Open Datasets | Yes | Our methods improve the accuracy by 4.05% on ImageNet compared to baseline and achieve state-of-the-art performance of 87.80% on CIFAR10-DVS and 87.86% on N-Caltech101. Furthermore, our LTS method improves the accuracy of SEW-ResNet50 [37] on the ImageNet1k [38] dataset to 71.83%, surpassing the vanilla SEW-ResNet152 (69.26%). Table 1: Test accuracy of TET and SDT, by loss function, with SEW-R18 on the static datasets (CIFAR10/100, ImageNet100, ImageNet1k) and with VGG11 on the neuromorphic datasets (CIFAR10-DVS, DVSG128, N-Caltech101).
Dataset Splits | No | Since the CIFAR10-DVS and N-Caltech101 datasets are not pre-divided into training and testing sets, we split these datasets in a 9:1 ratio. The split procedure (random vs. stratified, seed) is not specified. (A split sketch follows the table below.)
Hardware Specification | No | All neurons are implemented using SpikingJelly and PyTorch, and the computations are performed on GPU (Section 5.1); 4 GPUs (Table 8) / 1 GPU (Table 9). The paper gives the device type (GPU) and quantity but not specific models (e.g., NVIDIA A100).
Software Dependencies | No | All neurons are implemented using SpikingJelly and PyTorch (Section 5.1). The paper names these software components but does not provide version numbers. (A usage sketch follows the table below.)
Experiment Setup | Yes | A.8 Experimental Configurations and Hyperparameter Settings: Table 8 lists the key parameters required for training on the static datasets ImageNet1k, ImageNet100, CIFAR10, and CIFAR100. Table 9 outlines the key parameters used for training on the neuromorphic datasets CIFAR10-DVS-128, CIFAR10-DVS-48, N-Caltech101-128, and N-Caltech101-48. These tables list detailed hyperparameters such as architecture, time steps, learning rate, batch size, epochs, and optimizers. (A placeholder config sketch follows the table below.)
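
Because the paper gives its method only as in-text equations, the following is a minimal sketch of the textbook iterative LIF membrane update that this family of SNN work builds on. It is not the authors' proposed membrane dynamics, and the parameter names (tau, v_threshold, v_reset) are generic placeholders.

```python
import torch

def lif_step(v, x, tau=2.0, v_threshold=1.0, v_reset=0.0):
    """One step of standard (textbook) LIF membrane dynamics.

    v: membrane potential carried over from the previous time step
    x: input current at this time step
    Returns (spike, updated membrane potential).
    """
    # Leaky integration: decay toward the reset potential, then add input.
    v = v + (x - (v - v_reset)) / tau
    # Fire wherever the potential crosses the threshold.
    spike = (v >= v_threshold).float()
    # Hard reset for the neurons that fired.
    v = torch.where(spike.bool(), torch.full_like(v, v_reset), v)
    return spike, v

# Usage: unroll over T time steps for a batch of neurons.
T, batch = 4, 8
v = torch.zeros(batch)
for t in range(T):
    spike, v = lif_step(v, torch.rand(batch))
```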
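The 9:1 split reported for CIFAR10-DVS and N-Caltech101 is not otherwise specified, so exact reproduction is not possible from the text alone. One plausible reading, a seeded random 9:1 split via torch.utils.data.random_split, is sketched below; the function name and the seed are assumptions.

```python
import torch
from torch.utils.data import random_split

def split_9_to_1(dataset, seed=0):
    """Split a dataset 9:1 into train/test subsets.

    One plausible reading of the paper's "9:1 ratio"; the text does not
    say whether the split is random or class-stratified, or which seed.
    """
    n_train = int(0.9 * len(dataset))
    n_test = len(dataset) - n_train
    generator = torch.Generator().manual_seed(seed)
    return random_split(dataset, [n_train, n_test], generator=generator)
```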
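Since the dependencies are named but not version-pinned, here is a minimal sketch of how an LIF neuron is typically instantiated through SpikingJelly's activation_based API (the module path in recent releases; older versions used clock_driven instead), purely to illustrate what a version pin would need to cover.

```python
import torch
from spikingjelly.activation_based import neuron, functional

# A plain LIF neuron; tau and the surrogate gradient are the usual knobs.
lif = neuron.LIFNode(tau=2.0)

x_seq = torch.rand(4, 8)                     # [T, batch] input current
out = torch.stack([lif(x) for x in x_seq])   # step the neuron over T
functional.reset_net(lif)                    # reset membrane state between samples
```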
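To make the Table 8/9 checklist concrete, a hypothetical configuration dictionary covering the hyperparameter fields the appendix tables report is shown below. Every value is a placeholder, not a number taken from the paper.

```python
# Hypothetical training configuration mirroring the fields in Tables 8/9.
# All values are placeholders, NOT the paper's settings.
config = {
    "architecture": "SEW-ResNet18",
    "time_steps": 4,
    "learning_rate": 1e-3,
    "batch_size": 128,
    "epochs": 300,
    "optimizer": "SGD",
    "num_gpus": 4,  # Table 8 reports 4 GPUs; Table 9 reports 1
}
```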