EnOF-SNN: Training Accurate Spiking Neural Networks via Enhancing the Output Feature
Authors: Yufei Guo, Weihang Peng, Xiaode Liu, Yuanpei Chen, Yuhan Zhang, Xin Tong, Zhou Jie, Zhe Ma
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that our method consistently outperforms the current state-of-the-art algorithms on both popular non-spiking static and neuromorphic datasets. |
| Researcher Affiliation | Collaboration | Yufei Guo, Weihang Peng, Xiaode Liu, Yuanpei Chen, Yuhan Zhang, Xin Tong, Zhou Jie, Zhe Ma. Intelligent Science & Technology Academy of CASIC. yfguo@pku.edu.cn, pengweihang812@163.com, mazhe_thu@163.com |
| Pseudocode | Yes | Algorithm 1: Training SNN for one epoch (see the LIF/STBP sketch after this table). |
| Open Source Code | Yes | Question: Does the paper provide open access to the data and code, with sufficient instructions to faithfully reproduce the main experimental results, as described in supplemental material? Answer: [Yes]. Justification: We provide open access to the data and code with sufficient instructions in the supplemental material. |
| Open Datasets | Yes | on both static and neuromorphic datasets including CIFAR-10 Krizhevsky et al., CIFAR-100 Krizhevsky et al., ImageNet Deng et al. (2009), and CIFAR10-DVS Li (2017). |
| Dataset Splits | No | The paper specifies training and test splits for datasets (e.g., '50K training images and 10K test images' for CIFAR-10, '1,250K training and 50K test images' for ImageNet, and '9K training images and 1K test images' for CIFAR10-DVS), but it does not explicitly mention a validation set split. |
| Hardware Specification | No | The paper's NeurIPS checklist states that 'The computation resources description is provided in the appendix', but the provided document does not include an appendix with specific hardware details for running the experiments. The main body of the paper does not mention any specific GPU/CPU models or other machine specifications. |
| Software Dependencies | No | The paper mentions using the SGD optimizer and the STBP algorithm, but it does not specify any software library names (e.g., PyTorch, TensorFlow) or their version numbers, nor does it list specific CUDA versions or other software dependencies with their versions. |
| Experiment Setup | Yes | The firing threshold V_th and the membrane potential decay τ_decay were set to 0.5 and 0.25, respectively... We utilized the SGD optimizer to train our models for 400 epochs with 0.9 momentum and a learning rate of 0.1 cosine-decayed to 0. We found that λ = 0.1 can lead to a relatively better result... Data normalization, random horizontal flipping, cropping, AutoAugment Cubuk et al. (2019), and Cutout DeVries & Taylor (2017) were used for data augmentation. See the configuration sketch below the table. |
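
The "Pseudocode" row cites Algorithm 1 (training the SNN for one epoch) without reproducing it. For orientation, here is a minimal PyTorch sketch of the LIF dynamics and surrogate-gradient (STBP-style) forward pass implied by the setup row, using V_th = 0.5 and τ_decay = 0.25. The rectangular surrogate window, the reset-by-subtraction rule, and the `forward_over_time` helper are assumptions for illustration, not the paper's Algorithm 1.

```python
import torch

V_TH = 0.5        # firing threshold V_th from the experiment setup
TAU_DECAY = 0.25  # membrane potential decay tau_decay


class SpikeFn(torch.autograd.Function):
    """Heaviside firing with a rectangular surrogate gradient, as in STBP."""

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= V_TH).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradients only near the threshold; the window half-width
        # of 0.5 is an assumption, not taken from the paper.
        return grad_output * ((v - V_TH).abs() < 0.5).float()


def lif_step(x, v):
    """One LIF time step: decay the membrane, integrate input, fire, reset."""
    v = TAU_DECAY * v + x
    spike = SpikeFn.apply(v)
    v = v - spike * V_TH  # reset by subtraction (assumed convention)
    return spike, v


def forward_over_time(x_seq, layer):
    """Unroll T time steps; mean output spikes serve as rate-coded logits."""
    v = torch.zeros_like(layer(x_seq[0]))
    outs = []
    for x in x_seq:
        spike, v = lif_step(layer(x), v)
        outs.append(spike)
    return torch.stack(outs).mean(0)


# Toy usage: a single linear layer unrolled over T = 4 time steps.
layer = torch.nn.Linear(784, 10)
x_seq = [torch.rand(8, 784) for _ in range(4)]
logits = forward_over_time(x_seq, layer)
```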
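
The "Experiment Setup" row can likewise be condensed into a configuration sketch. The paper names no framework (see the "Software Dependencies" row), so PyTorch/torchvision is an assumption here; `model` is a hypothetical placeholder, the 32×32 crop and normalization statistics assume a CIFAR-sized input, and `RandomErasing` is torchvision's closest built-in stand-in for Cutout. λ = 0.1 weights the paper's auxiliary loss term, whose exact form is not quoted in the table, so it is only noted as a constant.

```python
import torch
import torchvision.transforms as T

EPOCHS = 400
LAMBDA = 0.1  # weight of the paper's auxiliary loss term (form not quoted above)

# Augmentation pipeline quoted in the setup row (CIFAR-sized crop assumed).
transform = T.Compose([
    T.RandomCrop(32, padding=4),
    T.RandomHorizontalFlip(),
    T.AutoAugment(T.AutoAugmentPolicy.CIFAR10),  # AutoAugment (Cubuk et al., 2019)
    T.ToTensor(),
    T.Normalize((0.4914, 0.4822, 0.4465),        # CIFAR-10 statistics, assumed
                (0.2470, 0.2435, 0.2616)),
    T.RandomErasing(),  # stand-in for Cutout (DeVries & Taylor, 2017)
])

model = torch.nn.Linear(3 * 32 * 32, 10)  # hypothetical placeholder network

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=EPOCHS, eta_min=0.0)  # lr 0.1 cosine-decayed to 0
```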