TC-LIF: A Two-Compartment Spiking Neuron Model for Long-Term Sequential Modelling
Authors: Shimin Zhang, Qu Yang, Chenxiang Ma, Jibin Wu, Haizhou Li, Kay Chen Tan
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results, on a diverse range of temporal classification tasks, demonstrate superior temporal classification capability, rapid training convergence, and high energy efficiency of the proposed TC-LIF model. |
| Researcher Affiliation | Academia | 1The Hong Kong Polytechnic University, Hong Kong SAR, China 2National University of Singapore, Singapore 3The Chinese University of Hong Kong, Shenzhen (CUHK-Shenzhen), Guangdong, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is publicly available at https://github.com/ZhangShimin1/TC-LIF. |
| Open Datasets | Yes | Then, we evaluate the TC-LIF model on various temporal classification benchmarks, including sequential MNIST (S-MNIST), permuted sequential MNIST (PS-MNIST), Google Speech Commands (GSC), Spiking Heidelberg Digits (SHD), and Spiking Google Speech Commands (SSC). |
| Dataset Splits | No | The paper uses datasets such as S-MNIST and PS-MNIST but does not explicitly provide dataset split information (exact percentages, sample counts, or citations to predefined splits) in the main text. It notes that 'More details about our experimental setups and training details are provided in Supplementary Materials'. |
| Hardware Specification | No | The paper mentions energy calculations based on a '45nm CMOS process' but does not specify the hardware used to run its experiments (GPU/CPU models, memory amounts, or other machine specifications). It states that 'More details about our experimental setups and training details are provided in Supplementary Materials', which may contain hardware specifics, but none appear in the main paper. |
| Software Dependencies | No | The paper does not provide specific ancillary software details, such as library or solver names with version numbers, needed to replicate the experiment. |
| Experiment Setup | No | The paper states, 'More details about our experimental setups and training details are provided in Supplementary Materials,' but does not provide specific experimental setup details (concrete hyperparameter values, training configurations, or system-level settings) in the main text. |
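For readers who want a concrete picture of what the table above is assessing: the paper's core idea is a neuron with two coupled compartments, a dendritic compartment that integrates input and a somatic compartment that emits spikes, whose interaction preserves information over long sequences. The sketch below is a minimal NumPy illustration of such two-compartment dynamics under simplifying assumptions, not the authors' implementation; the update rule, coupling sign, decay constants (`beta_d`, `beta_s`, `gamma`), threshold, and soft reset are all illustrative choices, and the actual model (with learnable, constrained coupling coefficients) lives in the repository linked above.

```python
import numpy as np

def tc_lif_step(u_d, u_s, x, beta_d=0.9, beta_s=0.9, gamma=0.5, theta=1.0):
    """One discrete-time step of a toy two-compartment LIF neuron.

    u_d : dendritic membrane potentials, shape [N]
    u_s : somatic membrane potentials, shape [N]
    x   : input current at this step, shape [N]
    Returns the updated (u_d, u_s, spike).
    """
    # Dendrite decays, integrates input, and receives negative feedback
    # from the soma; the coupled pair behaves like a damped oscillator,
    # which is what lets state persist across many timesteps.
    u_d = beta_d * u_d - gamma * u_s + x
    # Soma decays and integrates the dendritic potential.
    u_s = beta_s * u_s + gamma * u_d
    # Emit a spike wherever the somatic potential crosses threshold.
    spike = (u_s >= theta).astype(u_s.dtype)
    # Soft reset: subtract the threshold rather than zeroing the soma.
    u_s = u_s - spike * theta
    return u_d, u_s, spike

def run_tc_lif(inputs, **kwargs):
    """Unroll the neuron over a [T, N] input sequence; return [T, N] spikes."""
    T, N = inputs.shape
    u_d = np.zeros(N)
    u_s = np.zeros(N)
    spikes = np.zeros((T, N))
    for t in range(T):
        u_d, u_s, spikes[t] = tc_lif_step(u_d, u_s, inputs[t], **kwargs)
    return spikes

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x = rng.poisson(0.3, size=(100, 8)).astype(float)  # toy spike-count input
    out = run_tc_lif(x)
    print("mean firing rate:", out.mean())
```

With `beta_d = beta_s = 0.9` and `gamma = 0.5`, the subthreshold dynamics have complex eigenvalues of modulus 0.9, so the two compartments trade charge in a slowly decaying oscillation instead of forgetting input after a few steps, which is the intuition behind the model's long-term sequential modelling claim.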
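The energy-efficiency claim noted in the table is the kind usually backed by operation counting: a dense ANN layer pays one multiply-accumulate (MAC) per connection per step, while an event-driven SNN layer pays one accumulate (AC) only when a presynaptic spike arrives. The snippet below works through that arithmetic using per-operation energies commonly quoted for a 45nm CMOS process; the constants, layer sizes, and firing rates here are illustrative assumptions, not figures taken from the paper.

```python
# Per-operation energies often cited for a 45nm CMOS process
# (illustrative assumptions; the paper's exact constants may differ).
E_MAC = 4.6e-12  # joules per 32-bit multiply-accumulate (dense ANN)
E_AC = 0.9e-12   # joules per accumulate (event-driven SNN)

def ann_layer_energy(n_in, n_out, timesteps):
    """Every input drives every output with a MAC at every step."""
    return n_in * n_out * timesteps * E_MAC

def snn_layer_energy(n_in, n_out, firing_rate, timesteps):
    """Only spikes trigger accumulates, so energy scales with activity."""
    return n_in * n_out * firing_rate * timesteps * E_AC

# Hypothetical 700-input, 128-unit layer over 100 timesteps.
print("ANN:", ann_layer_energy(700, 128, 100), "J")
print("SNN:", snn_layer_energy(700, 128, 0.05, 100), "J")
```

At a 5% firing rate, the spiking layer in this toy comparison uses roughly 100x less energy than the dense baseline, which is the shape of argument that typically underlies "high energy efficiency" claims for SNNs.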