Notice: The reproducibility variables underlying each score are classified by an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias, so scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Online Stabilization of Spiking Neural Networks
Authors: Yaoyu Zhu, Jianhao Ding, Tiejun Huang, Xiaodong Xie, Zhaofei Yu
ICLR 2024 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments conducted on various datasets demonstrate the proposed method's superior performance among SNN online training algorithms. Our code is available at https://github.com/zhuyaoyu/SNN-onlinenormalization. |
| Researcher Affiliation | Academia | Yaoyu Zhu1, Jianhao Ding1, Tiejun Huang1,2, Xiaodong Xie1 & Zhaofei Yu1,2 1 School of Computer Science, Peking University 2 Institute for Artificial Intelligence, Peking University |
| Pseudocode | Yes | The overall algorithm description is provided in Appendix B. |
| Open Source Code | Yes | Our code is available at https://github.com/zhuyaoyu/SNN-onlinenormalization. |
| Open Datasets | Yes | We conduct experiments on CIFAR10, CIFAR100 (Krizhevsky et al., 2009), DVS-Gesture (Amir et al., 2017), CIFAR10-DVS (Li et al., 2017), and Imagenet (Deng et al., 2009) datasets to evaluate the performance of our method. |
| Dataset Splits | No | The paper describes data augmentation for training and mentions the test sets for evaluation, but does not explicitly provide details about specific validation dataset splits (percentages, counts, or explicit standard splits for a validation set). |
| Hardware Specification | Yes | All experiments are run on Nvidia RTX 4090 GPUs with Pytorch 2.0. |
| Software Dependencies | Yes | All experiments are run on Nvidia RTX 4090 GPUs with Pytorch 2.0. |
| Experiment Setup | Yes | Other hyperparameters we use are provided in Table 3, including total training epochs, batch size, learning rate, weight decay, ϵ (weight of MSE loss in Eq. 5), and dropout rate. |