Spike Count Maximization for Neuromorphic Vision Recognition

Authors: Jianxiong Tang, Jian-Huang Lai, Xiaohua Xie, Lingxiao Yang

IJCAI 2023

Reproducibility assessment (variable, result, and supporting LLM response):

Research Type: Experimental
LLM Response: Experiment results demonstrate that the SCM performs satisfactorily in most cases. Using the output spikes for prediction, the accuracies of SCM are 2.12% to 16.50% higher than those of the popular training losses on the CIFAR10-DVS dataset. The paper evaluates SCM on the neuromorphic datasets DVS128-GESTURE, CIFAR10-DVS, and ASL-DVS.

Researcher Affiliation: Academia
LLM Response: Jianxiong Tang (1), Jian-Huang Lai (1,2,3), Xiaohua Xie (1,2,3), and Lingxiao Yang (1,2,3). Affiliations: (1) School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou, China; (2) Guangdong Province Key Laboratory of Information Security Technology, Guangzhou, China; (3) Key Laboratory of Machine Intelligence and Advanced Computing, Ministry of Education, China.

Pseudocode: Yes
LLM Response: Algorithm 1 (Spike Count Maximization) is provided in the paper.
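
Algorithm 1 itself appears in the paper and the linked repository. As context, SCM predicts with output spike counts; below is a minimal, hypothetical sketch of that readout (the names snn and x_seq are assumptions), not the authors' Algorithm 1, which additionally performs a two-stage optimization.

    def spike_count_predict(snn, x_seq):
        # snn   -- a stateful spiking network whose output at each timestep
        #          is a 0/1 spike tensor of shape (batch, num_classes)
        # x_seq -- event-based input of shape (T, batch, ...), e.g. DVS frames
        counts = sum(snn(x_seq[t]) for t in range(x_seq.shape[0]))
        return counts.argmax(dim=1)  # predicted class = most output spikes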

Open Source Code: Yes
LLM Response: The code is available at https://github.com/TJXTT/SCM-SNN.

Open Datasets: Yes
LLM Response: The paper evaluates SCM on the neuromorphic datasets DVS128-GESTURE (DVS-G) [Amir et al., 2017], CIFAR10-DVS (C10-DVS) [Li et al., 2017], and ASL-DVS [Bi et al., 2019], all captured by a DVS. It compares SCM with SNNs trained using popular loss functions and spike-based BP algorithms; Tab. 1 summarizes the datasets, and details of the settings and results are presented in the following sections.

Dataset Splits: Yes
LLM Response: For C10-DVS, 90% of the samples are randomly separated for training and 10% for testing. For ASL-DVS, 80% of the samples are randomly separated for training and 20% for testing.
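
The paper specifies the split fractions but not the implementation; here is a minimal sketch using torch.utils.data.random_split, with the seed and the helper name split_dataset as assumptions:

    import torch
    from torch.utils.data import random_split

    def split_dataset(dataset, train_frac, seed=0):
        # Randomly partition a dataset into train/test subsets.
        n_train = int(len(dataset) * train_frac)
        gen = torch.Generator().manual_seed(seed)  # assumed seed, for repeatability
        return random_split(dataset, [n_train, len(dataset) - n_train], generator=gen)

    # train_set, test_set = split_dataset(c10_dvs, 0.9)  # 90/10 for C10-DVS
    # train_set, test_set = split_dataset(asl_dvs, 0.8)  # 80/20 for ASL-DVS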

Hardware Specification: No
LLM Response: The paper does not provide specific details on the hardware used to run the experiments, such as GPU/CPU models, memory, or cloud instance types.

Software Dependencies: No
LLM Response: The paper mentions the Adam optimizer and specific neuron models but does not give version numbers for any software dependencies, such as the programming language, libraries, or frameworks (e.g., Python, PyTorch, or TensorFlow).

Experiment Setup: Yes
LLM Response: The Adam optimizer with a learning rate of 0.001 is used to train all models in Stage 1; the training epochs for DVS-G, C10-DVS, and ASL-DVS are 30, 30, and 3, respectively. For Stage 2 training, the number of iterations is set to 10, β = 0.01, ρ = 1, and γ ranges from 0.001 to 1000 with a step size of 10 (a multiplicative step: 0.001, 0.01, ..., 1000).
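
As a compact restatement of these hyperparameters (a sketch with illustrative names; the Stage 2 update itself is only in the authors' repository):

    import torch

    EPOCHS = {"DVS-G": 30, "C10-DVS": 30, "ASL-DVS": 3}  # Stage 1 epochs
    GAMMA_GRID = [10.0 ** k for k in range(-3, 4)]       # 0.001, 0.01, ..., 1000
    STAGE2 = {"iterations": 10, "beta": 0.01, "rho": 1.0}

    def make_stage1_optimizer(model):
        # Adam with learning rate 0.001, as stated in the experiment setup
        return torch.optim.Adam(model.parameters(), lr=1e-3)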