Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

TS-SNN: Temporal Shift Module for Spiking Neural Networks

Authors: Kairong Yu, Tianqing Zhang, Qi Xu, Gang Pan, Hongwei Wang

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental To validate the effectiveness of the proposed TS-SNN, we evaluate its performance across four datasets and various network architectures. First, we provide details on the datasets and implementation specifics. Next, we present extensive ablation experiments to optimize the TS module. Subsequently, we compare the performance of TS-SNN with state-of-the-art methods on static image classification tasks and event-based vision tasks. Afterward, we evaluate the generality of the TS module on Transformer-based architectures. Finally, we analyze the computational efficiency of the proposed method.
Researcher Affiliation Academia 1Zhejiang University 2Dalian University of Technology. Correspondence to: Hongwei Wang <EMAIL>, Qi Xu <EMAIL>.
Pseudocode Yes Algorithm 1 Temporal Shift Module
Open Source Code No The paper does not contain an unambiguous statement that the authors are releasing the code for the work described, nor does it provide a direct link to a source-code repository.
Open Datasets Yes The proposed method was evaluated on four datasets: CIFAR-10, CIFAR-100, ImageNet, and CIFAR10-DVS. CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2010) are standard benchmarks for image classification... ImageNet (Deng et al., 2009) is a large-scale dataset... CIFAR10-DVS (Li et al., 2017b) is a neuromorphic dataset derived from the frame-based CIFAR-10 dataset using a dynamic vision sensor (DVS).
Dataset Splits Yes CIFAR-10 and CIFAR-100 (Krizhevsky et al., 2010) are standard benchmarks for image classification, consisting of 50,000 training images and 10,000 testing images, all sized 32×32. ... ImageNet (Deng et al., 2009) ... It includes 1.2 million training images, 50,000 validation images, and 100,000 test images... CIFAR10-DVS (Li et al., 2017b) ... dataset into training and testing sets in a 9:1 ratio.
Hardware Specification Yes Experiments on CIFAR-10, CIFAR-100 and CIFAR10-DVS were conducted using an NVIDIA RTX 3090 GPU, while experiments on ImageNet were performed using 8 NVIDIA RTX 4090 GPUs.
Software Dependencies No The entire codebase was implemented in PyTorch in this study. The paper mentions PyTorch but does not specify a version number.
Experiment Setup Yes Key hyperparameters, such as the firing threshold vth, were set to 1.0. The channel folding factor Ck was set to 32, and the shift operations followed the sequence: left, right, no shift. The default value of the penalty factor α was 0.5. The optimization process utilized the SGD optimizer with a momentum of 0.9, an initial learning rate of 0.1, and a cosine annealing learning rate schedule. The total number of training epochs was set to 500 for CIFAR-10, CIFAR-100, and CIFAR10-DVS, and 300 for ImageNet.
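For intuition, the shift sequence reported above (left, right, no shift, with a channel folding factor Ck) can be sketched as a TSM-style temporal shift along the time dimension of a spiking feature tensor. This is a minimal illustration, not the paper's Algorithm 1: the interpretation of Ck=32 as the divisor selecting the fraction of channels shifted in each direction is our assumption, and the sketch uses NumPy rather than the authors' PyTorch implementation.

```python
import numpy as np

def temporal_shift(x, fold_div=32):
    """TSM-style temporal shift over an array of shape (T, C, ...).

    One 1/fold_div slice of channels is shifted one step toward earlier
    time ("left"), the next slice one step toward later time ("right"),
    and all remaining channels are passed through unshifted, matching
    the left / right / no-shift sequence reported in the paper.
    Vacated positions at the temporal boundaries are zero-filled
    (zero-padding at the boundaries is our assumption).
    """
    T, C = x.shape[:2]
    fold = C // fold_div  # channels per shifted group (role of Ck assumed)
    out = np.zeros_like(x)
    out[:-1, :fold] = x[1:, :fold]                # shift left in time
    out[1:, fold:2 * fold] = x[:-1, fold:2 * fold]  # shift right in time
    out[:, 2 * fold:] = x[:, 2 * fold:]           # no shift
    return out
```

With T=4 time steps and C=64 channels, `fold_div=32` shifts 2 channels left, 2 right, and leaves the remaining 60 untouched, so most features keep their original timing while a small fraction mixes information across adjacent time steps.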