Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Towards More Discriminative Feature Learning in SNNs with Temporal-Self-Erasing Supervision

Authors: Wei Liu, Li Yang, Mingxuan Zhao, Dengfeng Xue, Shuxun Wang, Boyu Cai, Jin Gao, Wenjuan Li, Bing Li, Weiming Hu

AAAI 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results on benchmark datasets demonstrate that our TSE method significantly improves the classification accuracy and robustness of SNNs. Experimental results demonstrate the efficacy of our method across various benchmarks. From the Experiments section (Datasets): In this section, we conduct extensive experiments to validate the efficacy of our proposed method. We employ Spiking ResNet-18/19 as our backbone and experiment on datasets such as CIFAR-100 (Krizhevsky, Nair, and Hinton 2010), ImageNet (Deng et al. 2009), and the neuromorphic DVS-CIFAR10 (Li et al. 2017).
Researcher Affiliation Academia 1State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences 2School of Artificial Intelligence, University of Chinese Academy of Sciences 3School of Information Science and Technology, ShanghaiTech University
Pseudocode No The paper describes its methodology using textual explanations and mathematical equations, such as equations (1) to (12), but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code No The paper does not contain any explicit statements about making its source code available, nor does it provide a link to a code repository.
Open Datasets Yes We employ Spiking ResNet-18/19 as our backbone and experiment on datasets such as CIFAR-100 (Krizhevsky, Nair, and Hinton 2010), ImageNet (Deng et al. 2009), and the neuromorphic DVS-CIFAR10 (Li et al. 2017).
Dataset Splits Yes CIFAR-100 has 100 classes, each containing 600 images: 500 per class for training and 100 per class for testing. ImageNet has over 1.2 million training images and 50 thousand validation images.
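The quoted CIFAR-100 split figures are internally consistent; a quick arithmetic check using only the numbers from the quote:

```python
# CIFAR-100, per the quoted split: 100 classes, 600 images per class,
# divided into 500 training and 100 test images per class.
classes = 100
train_per_class, test_per_class = 500, 100
total_per_class = train_per_class + test_per_class  # 600 images per class

train_total = classes * train_per_class  # 50,000 training images overall
test_total = classes * test_per_class    # 10,000 test images overall
```

The ImageNet counts match the standard ILSVRC-2012 split (about 1.28 million training images and 50,000 validation images).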
Hardware Specification Yes For the CIFAR-100 and DVS-CIFAR10 datasets, we conduct both training and inference processes utilizing a single NVIDIA A100 GPU. For the ImageNet dataset, we employ four NVIDIA A100 GPUs for training.
Software Dependencies No The paper mentions using
Experiment Setup Yes The threshold voltage and the membrane potential decay constant are set to 1 and 2, respectively. Our SNN model is trained using the Stochastic Gradient Descent (SGD) optimizer with a momentum of 0.9 and an initial learning rate of 0.1, decayed to 0 with a cosine learning rate scheduler. The batch size and number of epochs are set to 32 and 320, respectively.
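The quoted setup fixes two kinds of hyperparameters: the LIF neuron constants (threshold voltage 1, membrane decay constant 2) and the learning-rate schedule (cosine annealing from 0.1 to 0 over 320 epochs). A minimal stdlib-only sketch of both, assuming a hard-reset LIF update rule and standard cosine annealing; the paper's exact reset rule and scheduler implementation are not quoted, so those details are assumptions:

```python
import math

V_TH = 1.0  # threshold voltage (from the quoted setup)
TAU = 2.0   # membrane potential decay constant (from the quoted setup)

def lif_step(u, x, v_th=V_TH, tau=TAU):
    """One LIF update: decay the membrane potential, integrate the input,
    spike when the threshold is crossed, then hard-reset (assumed rule)."""
    u = u / tau + x
    spike = u >= v_th
    if spike:
        u = 0.0  # hard reset to resting potential (assumption)
    return u, spike

def cosine_lr(epoch, total_epochs=320, lr0=0.1):
    """Cosine annealing from lr0 down to 0, matching the quoted schedule."""
    return 0.5 * lr0 * (1.0 + math.cos(math.pi * epoch / total_epochs))

# A constant input of 0.6 takes three steps to drive the neuron over
# threshold: u goes 0.6 -> 0.9 -> 1.05 (spike, reset) -> 0.6.
u = 0.0
spikes = []
for _ in range(4):
    u, spike = lif_step(u, 0.6)
    spikes.append(spike)
```

In a PyTorch-based setup, the schedule above would typically be replaced by `torch.optim.lr_scheduler.CosineAnnealingLR` with `T_max=320`.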