EC-SNN: Splitting Deep Spiking Neural Networks for Edge Devices
Authors: Di Yu, Xin Du, Linshan Jiang, Wentao Tong, Shuiguang Deng
IJCAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We design extensive experiments on six datasets (i.e., four non-neuromorphic and two neuromorphic datasets) to substantiate that our approach can significantly diminish the inference execution latency on edge devices and reduce the overall energy consumption per deployed device with an average reduction of 60.7% and 27.7% respectively while keeping the effectiveness of the accuracy. |
| Researcher Affiliation | Academia | Di Yu¹, Xin Du¹, Linshan Jiang², Wentao Tong¹ and Shuiguang Deng¹ (¹Zhejiang University; ²National University of Singapore) |
| Pseudocode | Yes | Algorithm 1 Model Splitting in EC-SNN |
| Open Source Code | Yes | Code is available at https://github.com/AmazingDD/EC-SNN |
| Open Datasets | Yes | we choose 4 non-neuromorphic datasets (CIFAR, Caltech, GTZAN, and Urban Sound) and 2 neuromorphic datasets (CIFARDVS, NCaltech) to construct the classification task in our experiments. |
| Dataset Splits | No | The paper mentions training epochs and batch size, but does not explicitly provide training/validation/test dataset split percentages, sample counts, or a detailed splitting methodology for reproducibility. |
| Hardware Specification | Yes | Each trial is carried out on one NVIDIA GeForce RTX 4090 GPU and 9 Raspberry Pi-4B as the edge devices for evaluating the execution time of a specific sub-model processing one sample. |
| Software Dependencies | No | The paper states, 'All models are implemented based on PyTorch and SpikingJelly', but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We set the time step to 5 and the total number of training epochs to 70 for all networks. During the training process, we utilize Adam optimizer with a cosine-decay learning rate initiated to 1e-4 and set the batch size to 16. For LIF neurons, the time constant τ is set to 1.33. |
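The Experiment Setup row lists concrete hyperparameters (time step 5, 70 training epochs, Adam with a cosine-decay learning rate starting at 1e-4, batch size 16, LIF time constant τ = 1.33). The sketch below is a minimal, hedged reconstruction of such a training configuration in plain PyTorch. It is not the authors' EC-SNN code (which is built on PyTorch and SpikingJelly and released in the repository above): the `SimpleLIF` neuron, the `SurrogateSpike` function, the toy `nn.Sequential` backbone, the CIFAR-sized input shape, and `train_one_batch` are all illustrative assumptions.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted from the paper's experiment setup.
T = 5            # number of simulation time steps
EPOCHS = 70      # total training epochs
LR = 1e-4        # initial learning rate, decayed with a cosine schedule
BATCH_SIZE = 16
TAU = 1.33       # LIF membrane time constant

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike with a sigmoid surrogate gradient (a common SNN training trick;
    the exact surrogate used by EC-SNN is not reproduced here)."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= 0.0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        sg = torch.sigmoid(4.0 * v)
        return grad_output * sg * (1.0 - sg) * 4.0

class SimpleLIF(nn.Module):
    """Minimal LIF neuron; the paper uses SpikingJelly's LIF nodes instead."""
    def __init__(self, tau=TAU, v_threshold=1.0):
        super().__init__()
        self.tau = tau
        self.v_threshold = v_threshold
        self.v = 0.0

    def forward(self, x):
        # Leaky integration toward the input, then spike and hard reset.
        self.v = self.v + (x - self.v) / self.tau
        spike = SurrogateSpike.apply(self.v - self.v_threshold)
        self.v = self.v * (1.0 - spike)
        return spike

    def reset(self):
        self.v = 0.0

# Hypothetical tiny spiking classifier for CIFAR-sized inputs; EC-SNN's backbones differ.
model = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 256),
    SimpleLIF(),
    nn.Linear(256, 10),
)

optimizer = torch.optim.Adam(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=EPOCHS)
criterion = nn.CrossEntropyLoss()

def train_one_batch(images, labels):
    """Run T time steps, average the outputs (rate coding), and take one Adam step."""
    for m in model.modules():
        if isinstance(m, SimpleLIF):
            m.reset()
    out = 0.0
    for _ in range(T):
        out = out + model(images)
    loss = criterion(out / T, labels)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Example with random data standing in for a real batch of CIFAR-sized samples.
images = torch.randn(BATCH_SIZE, 3, 32, 32)
labels = torch.randint(0, 10, (BATCH_SIZE,))
print(train_one_batch(images, labels))
```

In a full run, `scheduler.step()` would be called once per epoch across the 70 epochs to realize the cosine decay. The sketch also omits the paper's model-splitting procedure (Algorithm 1) and the edge deployment on Raspberry Pi devices; it only mirrors the quoted training hyperparameters.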