Memory-Efficient Reversible Spiking Neural Networks
Authors: Hong Zhang, Yu Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through experiments on static and neuromorphic datasets, we demonstrate that the memory cost per image of our reversible SNNs does not increase with the network depth. On the CIFAR10 and CIFAR100 datasets, our RevSResNet37 and RevSFormer-4-384 achieve comparable accuracies and consume 3.79× and 3.00× lower GPU memory per image than their counterparts with roughly identical model complexity and parameters. |
| Researcher Affiliation | Academia | Hong Zhang1, Yu Zhang1,2* 1State Key Laboratory of Industrial Control Technology, College of Control Science and Engineering, Zhejiang University, Hangzhou, China 2Key Laboratory of Collaborative Sensing and Autonomous Unmanned Systems of Zhejiang Province, Hangzhou, China {hongzhang99, zhangyu80}@zju.edu.cn |
| Pseudocode | No | The paper describes methods using text and diagrams (Figure 2, 3, 4) but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | No | No explicit statement about code release or link to a code repository for the methodology described in this paper. |
| Open Datasets | Yes | We verify the effect of RevSResNet and RevSFormer on static datasets (CIFAR10 and CIFAR100 (Krizhevsky, Hinton et al. 2009)) and neuromorphic datasets (CIFAR10-DVS (Li et al. 2017) and DVS128 Gesture (Amir et al. 2017)). |
| Dataset Splits | No | CIFAR10 and CIFAR100 each provide 50,000 training and 10,000 test images. |
| Hardware Specification | Yes | The values are measured on a single 24GB RTX3090 GPU under CIFAR10 dataset. |
| Software Dependencies | No | No specific version numbers for key software components (e.g., Python, PyTorch, or other libraries) used in the implementation are provided. |
| Experiment Setup | Yes | The training time and maximum batch size of our reversible SNNs and their non-reversible counterparts are shown in Table 4. Maximum batch size: MS-ResNet34 239 vs. RevSResNet37 644; Spikingformer-4-384 164 vs. RevSFormer-4-384 286. |
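The memory savings reported above come from reversible blocks, whose inputs can be recomputed from their outputs during backpropagation instead of being cached, so activation memory stays constant with depth. A minimal sketch of the additive-coupling idea behind such blocks, using placeholder sub-modules `F` and `G` rather than the paper's actual spiking residual/transformer modules:

```python
import numpy as np

# Placeholder sub-modules; in RevSResNet/RevSFormer these would be
# spiking residual or transformer sub-blocks (our assumption here is
# only the generic additive-coupling structure, not the paper's modules).
def F(x):
    return np.tanh(x)

def G(x):
    return 0.5 * x

def forward(x1, x2):
    # Additive coupling: each output mixes one stream into the other.
    y1 = x1 + F(x2)
    y2 = x2 + G(y1)
    return y1, y2

def inverse(y1, y2):
    # Inputs are recovered exactly from outputs, so intermediate
    # activations need not be stored for the backward pass.
    x2 = y2 - G(y1)
    x1 = y1 - F(x2)
    return x1, x2

rng = np.random.default_rng(0)
x1, x2 = rng.standard_normal(4), rng.standard_normal(4)
y1, y2 = forward(x1, x2)
r1, r2 = inverse(y1, y2)
print(np.allclose(r1, x1), np.allclose(r2, x2))  # → True True
```

Because every block is invertible this way, a deep stack only ever needs the activations of the current block in memory, which is consistent with the paper's observation that per-image memory cost does not grow with network depth.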