Online Training Through Time for Spiking Neural Networks
Authors: Mingqing Xiao, Qingyan Meng, Zongpeng Zhang, Di He, Zhouchen Lin
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS demonstrate the superior performance of our method on large-scale static and neuromorphic datasets in a small number of time steps. Our code is available at https://github.com/pkuxmq/OTTT-SNN. |
| Researcher Affiliation | Academia | (1) Key Lab. of Machine Perception (MoE), School of Intelligence Science and Technology, Peking University; (2) The Chinese University of Hong Kong, Shenzhen; (3) Shenzhen Research Institute of Big Data; (4) Center for Data Science, Academy for Advanced Interdisciplinary Studies, Peking University; (5) Institute for Artificial Intelligence, Peking University; (6) Peng Cheng Laboratory, China |
| Pseudocode | Yes | Pseudo-codes are in Appendix B. (A hedged sketch of the online update appears below the table.) |
| Open Source Code | Yes | Our code is available at https://github.com/pkuxmq/OTTT-SNN. |
| Open Datasets | Yes | In this section, we conduct extensive experiments on CIFAR-10 [58], CIFAR-100 [58], ImageNet [59], CIFAR10-DVS [60], and DVS128-Gesture [61] to demonstrate the superior performance of our proposed method on large-scale static and neuromorphic datasets. |
| Dataset Splits | No | The paper discusses batch sizes and epochs and refers to Appendix C for training details, which may include splits, but the main text does not explicitly state training/validation/test splits as percentages or sample counts. |
| Hardware Specification | No | The paper mentions GPU training and GPU memory usage but does not specify particular GPU models, CPU models, or other hardware details used to run the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python version, PyTorch version, etc.). |
| Experiment Setup | Yes | For all our SNN models, we set Vth = 1 and λ = 0.5. Please refer to Appendix C for training details. ... We verify this by training the VGG network on CIFAR-10 with batch size 128 under different time steps and calculating the memory costs on the GPU. ... Models are only trained for 20 epochs due to the relatively long training time with batch size 1. (Hedged sketches of the update rule and the memory measurement appear below the table.) |
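
The paper's pseudocode is in Appendix B and the released repository; as a rough illustration of the reported setup (Vth = 1, leak factor λ = 0.5), here is a minimal sketch of a single LIF layer with an OTTT-style online weight update. The surrogate derivative, trace definition, and reset convention below are assumptions based on the common OTTT formulation, not the authors' code, and should be checked against Appendix B and the official repo.

```python
import torch

# Hyperparameters reported in the paper: Vth = 1, leak factor lambda = 0.5.
V_TH, LAM = 1.0, 0.5

def surrogate_grad(u):
    # Assumed sigmoid-shaped surrogate derivative of the Heaviside spike
    # function; the paper's exact choice should be checked in Appendix B.
    sig = torch.sigmoid(4.0 * (u - V_TH))
    return 4.0 * sig * (1.0 - sig)

def ottt_step(W, x_t, u, a_hat, grad_s):
    """One online step of a single LIF layer with an OTTT-style update.

    W       (out, in)  weight matrix
    x_t     (in,)      presynaptic spikes at time t
    u       (out,)     membrane potentials carried over from t-1
    a_hat   (in,)      decayed trace of presynaptic spikes
    grad_s  (out,)     gradient of the instantaneous loss w.r.t. the spikes
    """
    u = LAM * u + W @ x_t        # leaky integration of the input current
    s = (u >= V_TH).float()      # fire where the threshold is crossed
    sg = surrogate_grad(u)       # surrogate slope at the pre-reset potential
    u = u - V_TH * s             # soft reset by subtraction
    a_hat = LAM * a_hat + x_t    # update the presynaptic trace
    # Instantaneous weight gradient: local error signal times the trace,
    # so no unrolled computation graph over time steps has to be stored.
    grad_W = (grad_s * sg).unsqueeze(1) * a_hat.unsqueeze(0)
    return s, u, a_hat, grad_W

# Toy usage: 4 inputs, 3 neurons, 5 time steps of random spike inputs.
W = torch.randn(3, 4) * 0.5
u, a_hat = torch.zeros(3), torch.zeros(4)
for t in range(5):
    x_t = (torch.rand(4) < 0.3).float()
    grad_s = torch.randn(3)              # placeholder error signal
    s, u, a_hat, grad_W = ottt_step(W, x_t, u, a_hat, grad_s)
    W = W - 0.1 * grad_W                 # immediate online update
```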
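
The Experiment Setup row also mentions measuring GPU memory cost for the VGG network on CIFAR-10 with batch size 128. The paper does not say how the measurement was taken; a plausible sketch using PyTorch's built-in peak-memory counters (with a stand-in linear model, not the actual VGG SNN) might look like:

```python
import torch
import torch.nn as nn

assert torch.cuda.is_available(), "requires a CUDA device"
torch.cuda.reset_peak_memory_stats()

model = nn.Linear(1024, 10).cuda()          # stand-in for the VGG SNN
x = torch.randn(128, 1024, device="cuda")   # batch size 128, as in the paper
loss = model(x).sum()                       # forward pass
loss.backward()                             # backward pass

peak_mib = torch.cuda.max_memory_allocated() / 1024**2
print(f"peak GPU memory for one iteration: {peak_mib:.1f} MiB")
```

Repeating this for different numbers of time steps would reproduce the kind of memory comparison the paper reports, since OTTT's cost should stay roughly constant in the number of steps while BPTT's grows with it.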