Efficient and Accurate Conversion of Spiking Neural Network with Burst Spikes
Authors: Yang Li, Yi Zeng
IJCAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on CIFAR and ImageNet demonstrate that our algorithm is efficient and accurate. For example, our method can ensure nearly lossless conversion of SNN and only use about 1/10 (less than 100) simulation time under 0.693× energy consumption of the typical method. Our code is available at https://github.com/Brain-Inspired-Cognitive-Engine/Conversion_Burst. |
| Researcher Affiliation | Academia | Yang Li (1,2) and Yi Zeng (1,2,3,4); 1. Research Center for Brain-inspired Intelligence, Institute of Automation, Chinese Academy of Sciences; 2. School of Artificial Intelligence, University of Chinese Academy of Sciences; 3. Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences; 4. National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences; {liyang2019, yi.zeng}@ia.ac.cn |
| Pseudocode | Yes | Algorithm 1 Transmit with Burst Spikes |
| Open Source Code | Yes | Our code is available at https://github.com/Brain-Inspired-Cognitive-Engine/Conversion_Burst. |
| Open Datasets | Yes | We conduct experiments on CIFAR and ImageNet. For CIFAR, we use VGG16 and ResNet20 models, and for ImageNet, we use VGG16 model. |
| Dataset Splits | No | The paper mentions using 'validation set' and details training parameters, but it does not explicitly provide specific percentages, sample counts, or formal citations for the training, validation, or test dataset splits. It relies on the reader's knowledge of standard splits for CIFAR and ImageNet. |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU model, CPU type) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'pytorchcv' but does not specify a version number for it or any other software components (e.g., Python, PyTorch) that were used to run the experiments. |
| Experiment Setup | Yes | For CIFAR, we use stochastic gradient descent with 0.9 momentum for weight optimization. The cosine learning rate decay strategy with an initial value of 0.1 is used to change the learning rate dynamically. The network is optimized for 300 iterations with a batch size of 128. We use data augmentations for high performance. |
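
To make the quoted CIFAR training setup concrete, below is a minimal PyTorch sketch of that configuration: SGD with 0.9 momentum, cosine learning-rate decay from an initial value of 0.1, batch size 128, and 300 training passes with data augmentation. This is not the authors' released code; the VGG16 backbone, the specific CIFAR-10 augmentations, and reading "300 iterations" as 300 epochs are assumptions made for illustration.

```python
# Minimal sketch of the quoted training recipe (assumptions noted in comments).
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as transforms

# Standard CIFAR-10 augmentation; the paper only says "data augmentations",
# so these exact transforms are an assumption.
transform_train = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize((0.4914, 0.4822, 0.4465), (0.2470, 0.2435, 0.2616)),
])
train_set = torchvision.datasets.CIFAR10(root="./data", train=True, download=True,
                                         transform=transform_train)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=128, shuffle=True)

model = torchvision.models.vgg16(num_classes=10)  # placeholder ANN backbone
criterion = nn.CrossEntropyLoss()

# SGD with 0.9 momentum and cosine learning-rate decay starting at 0.1,
# as quoted in the Experiment Setup row above.
optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=300)

for epoch in range(300):  # "300 iterations" in the quote, interpreted here as epochs
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

Reproducing the paper would additionally require the burst-spike conversion step applied to the trained ANN, which is not shown here.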