Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Temporal Effective Batch Normalization in Spiking Neural Networks
Authors: Chaoteng Duan, Jianhao Ding, Shiyan Chen, Zhaofei Yu, Tiejun Huang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results on both static and neuromorphic datasets show that SNNs with TEBN outperform state-of-the-art methods in accuracy with fewer time-steps, and achieve better robustness to hyper-parameters than other normalization methods. |
| Researcher Affiliation | Academia | Chaoteng Duan, School of Electronic and Computer Engineering, Peking University, Beijing, China 100871; Jianhao Ding, School of Computer Science, Peking University; Shiyan Chen, School of Electronic and Computer Engineering, Peking University; Zhaofei Yu, Institute for Artificial Intelligence, School of Computer Science, Peking University; Tiejun Huang, School of Computer Science, Peking University. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code will be available in the supplementary. |
| Open Datasets | Yes | We use CIFAR10/100 [28] and CIFAR10-DVS [30]. |
| Dataset Splits | No | The paper mentions that training details are in the supplementary, but does not explicitly provide specific dataset split information (e.g., percentages or sample counts for training, validation, or test sets) in the main text. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments within the provided text. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers. |
| Experiment Setup | No | More details of the configurations are provided in the supplementary. |