Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Ternary Spike: Learning Ternary Spikes for Spiking Neural Networks
Authors: Yufei Guo, Yuanpei Chen, Xiaode Liu, Weihang Peng, Yuhan Zhang, Xuhui Huang, Zhe Ma
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In the paper, we theoretically and experimentally prove that the binary spike activation map cannot carry enough information, thus causing information loss and decreased accuracy. Extensive experiments with several popular network structures over static and dynamic datasets show that the ternary spike can consistently outperform state-of-the-art methods. |
| Researcher Affiliation | Collaboration | Intelligent Science & Technology Academy of CASIC, China |
| Pseudocode | No | The paper describes the LIF neuron model and re-parameterization technique using mathematical equations (e.g., equations 1-5, 7-8, 13-17), but it does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is open-sourced at https://github.com/yfguo91/Ternary-Spike. |
| Open Datasets | Yes | We evaluate our methods on both static (CIFAR10 (Krizhevsky, Nair, and Hinton 2010), CIFAR100 (Krizhevsky, Nair, and Hinton 2010), ImageNet (Deng et al. 2009)) and spiking (CIFAR10-DVS (Li et al. 2017)) datasets with widely used backbones. |
| Dataset Splits | No | The paper states the use of standard datasets like CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS but does not explicitly provide the specific training, validation, or test split percentages or sample counts used for these datasets within its text. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running experiments, such as particular GPU models, CPU specifications, or cloud computing instances. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, CUDA versions) that would be needed to replicate the experimental environment. |
| Experiment Setup | Yes | We train a spiking ResNet20 (He et al. 2016) with 1 and 2 timesteps on the CIFAR-10 (Krizhevsky, Nair, and Hinton 2010) dataset and show the membrane potential distributions of the different layers in Fig. 2. Our method reaches 70.74% top-1 accuracy on ImageNet using ResNet34 with only 4 timesteps. |
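The rows above reference the LIF neuron model with a ternary spike activation. As a rough illustration of the idea (not the paper's exact formulation), the sketch below simulates a single leaky integrate-and-fire neuron that emits spikes in {-1, 0, +1} instead of the usual binary {0, 1}; the leak factor `tau`, threshold `v_th`, and reset-to-zero rule are illustrative assumptions:

```python
def ternary_lif(inputs, tau=0.5, v_th=1.0):
    """Simulate one LIF neuron emitting ternary spikes {-1, 0, +1}.

    Assumed dynamics (hypothetical parameters, for illustration only):
    the membrane potential leaks by factor tau and integrates the input
    each timestep; the neuron fires +1 when the potential exceeds v_th,
    -1 when it drops below -v_th, and 0 otherwise, resetting to zero
    after any spike.
    """
    u = 0.0
    spikes = []
    for x in inputs:
        u = tau * u + x          # leaky integration of the input current
        if u >= v_th:
            spikes.append(1)     # positive spike
            u = 0.0              # hard reset after firing
        elif u <= -v_th:
            spikes.append(-1)    # negative spike carries extra information
            u = 0.0
        else:
            spikes.append(0)     # sub-threshold: no spike
    return spikes

print(ternary_lif([1.5, 0.1, -1.5, 0.1]))
```

Compared with a binary neuron, the negative spike lets the activation map carry sign information at the same event-driven cost, which is the intuition behind the paper's information-capacity argument.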