TIM: An Efficient Temporal Interaction Module for Spiking Transformer

Authors: Sicheng Shen, Dongcheng Zhao, Guobin Shen, Yi Zeng

IJCAI 2024

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | "To validate the performance of our algorithm, we conducted comprehensive tests on multiple neuromorphic datasets, including DVS-CIFAR10, N-CALTECH101, NCARS, UCF101-DVS, and HMDB51-DVS."
Researcher Affiliation | Academia | "Sicheng Shen (1,2,4), Dongcheng Zhao (1,2), Guobin Shen (1,2,4) and Yi Zeng (1,2,3,4,5). (1) Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences; (2) Center for Long-term Artificial Intelligence; (3) Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, CAS; (4) School of Future Technology, University of Chinese Academy of Sciences; (5) School of Artificial Intelligence, University of Chinese Academy of Sciences"
Pseudocode | No | The paper describes its methods and components but does not include structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | "The code is available at https://github.com/BrainCog-X/Brain-Cog/tree/main/examples/TIM."
Open Datasets | Yes | "To validate the performance of our algorithm, we conducted comprehensive tests on multiple neuromorphic datasets, including DVS-CIFAR10, N-CALTECH101, NCARS, UCF101-DVS, and HMDB51-DVS. Furthermore... CIFAR10-DVS is an event stream dataset comprising 10,000 images from the CIFAR-10 dataset. ... The N-CALTECH101 dataset, as introduced in [Orchard et al., 2015]... The NCARS dataset, as introduced in [Orchard et al., 2015]... The UCF101-DVS and HMDB51-DVS datasets represent neuromorphic adaptations of the well-known UCF101 and HMDB51 datasets, respectively. ... The Spiking Heidelberg Digits (SHD) dataset..."
Dataset Splits | No | The paper describes using various neuromorphic datasets for testing but does not explicitly provide train/validation/test splits (e.g., percentages or sample counts) for reproducibility, nor does it explicitly mention a validation set.
Hardware Specification | No | The paper states "All experiments were completed on the BrainCog [Zeng et al., 2023] platform," which is a software platform, but it does not specify any hardware details such as GPU models, CPU types, or memory.
Software Dependencies | No | The paper mentions that "All experiments were completed on the BrainCog [Zeng et al., 2023] platform," but it does not provide specific version numbers for this platform or for any other software dependencies such as libraries or programming languages.
Experiment Setup | Yes | "In the experiments, we set the batch size to 16 and used the AdamW optimizer. The total number of training epochs was set to 500. The initial learning rate was set to 0.005, adjusted with a cosine decay strategy. The time constant (τ value) of the LIF node was set to 2, and its firing threshold was set to 1. The simulation step length of the SNN was set to 10. The default α of the TIM stream was set to 0.5."
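To make the reported setup concrete, the snippet below collects the stated hyperparameters and sketches two of the pieces they parameterize: a standard cosine-decay learning-rate schedule and a discrete leaky integrate-and-fire (LIF) update. The exact schedule formula and the LIF update/reset convention shown here are common choices assumed for illustration; the paper only states the hyperparameter values, not these implementation details.

```python
import math

# Hyperparameters reported in the paper's experiment setup.
BATCH_SIZE = 16
EPOCHS = 500
INITIAL_LR = 0.005
TAU = 2.0          # LIF membrane time constant (tau)
V_THRESHOLD = 1.0  # LIF firing threshold
TIMESTEPS = 10     # SNN simulation step length
ALPHA = 0.5        # default alpha of the TIM stream

def cosine_decay_lr(epoch, total_epochs=EPOCHS, base_lr=INITIAL_LR, min_lr=0.0):
    """Cosine-annealed learning rate at a given 0-indexed epoch.

    Standard cosine schedule (assumed form): starts at base_lr and
    decays smoothly to min_lr by the final epoch.
    """
    return min_lr + 0.5 * (base_lr - min_lr) * (
        1.0 + math.cos(math.pi * epoch / total_epochs)
    )

def lif_step(v, input_current, tau=TAU, v_th=V_THRESHOLD):
    """One discrete LIF update: leaky integration, spike, hard reset.

    A minimal textbook discretization, not the paper's exact neuron code.
    """
    v = v + (input_current - v) / tau  # leak toward the input current
    spike = 1.0 if v >= v_th else 0.0
    if spike:
        v = 0.0  # hard reset after firing
    return v, spike

print(cosine_decay_lr(0))    # 0.005 (initial rate)
print(cosine_decay_lr(250))  # ~0.0025 (halfway through training)
```

With tau = 2, a constant input of 1.5 drives the membrane potential to 0.75 after one step and past the threshold of 1 on the second, producing a spike and a reset; repeating this over the 10 simulation timesteps yields the spike train the transformer consumes.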