Efficient and Effective Time-Series Forecasting with Spiking Neural Networks
Authors: Changze Lv, Yansen Wang, Dongqi Han, Xiaoqing Zheng, Xuanjing Huang, Dongsheng Li
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conduct experiments to investigate the following research questions: |
| Researcher Affiliation | Collaboration | The work was conducted during the internship of Changze Lv (czlv22@m.fudan.edu.cn) at Microsoft Research Asia. ¹School of Computer Science, Fudan University, Shanghai, China; ²Microsoft Research Asia, Shanghai, China. |
| Pseudocode | No | The paper describes the methodology in prose and mathematical equations but does not include structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code is available at https://github.com/microsoft/SeqSNN. |
| Open Datasets | Yes | Metr-la (Li et al., 2017b); Pems-bay (Li et al., 2017b); Electricity (Lai et al., 2018); Solar (Lai et al., 2018) |
| Dataset Splits | Yes | We partitioned the forecasting datasets into train, validation, and test sets following a chronological order. The statistical characteristics and specific split details can be found in Table 4. |
| Hardware Specification | Yes | We run our experiments on 4 NVIDIA RTX A6000 GPUs. |
| Software Dependencies | No | The paper mentions snnTorch and SpikingJelly as PyTorch-based frameworks and the Adam optimizer, but does not provide specific version numbers for any of these software dependencies. |
| Experiment Setup | Yes | To construct our proposed SNNs, we use two PyTorch-based frameworks: snnTorch (Eshraghian et al., 2021) and SpikingJelly (Fang et al., 2020b). For all SNNs, we set the time step Ts = 4. For all LIF neurons in SNNs, we set threshold Uthr = 1.0, decay rate β = 0.99, and α = 2 in the surrogate gradient function. ... we set the batch size as 128 and adopt the Adam (Kingma & Ba, 2014) optimizer with a learning rate of 1 × 10⁻⁴. We adopt an early stopping strategy with 30 epochs tolerance. |
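
The hyperparameters quoted in the experiment-setup row map onto a standard LIF formulation. Below is a minimal, self-contained PyTorch sketch, not the authors' SeqSNN code, showing how the quoted settings (Ts = 4, Uthr = 1.0, β = 0.99, α = 2, batch size 128, Adam with learning rate 1 × 10⁻⁴) could be wired together. The arctan surrogate-gradient formula, the soft-reset rule, and the toy layer sizes are assumptions for illustration; consult https://github.com/microsoft/SeqSNN for the exact definitions.

```python
import torch
import torch.nn as nn

# Hyperparameters quoted from the paper's experiment setup.
T_STEPS = 4     # SNN time steps Ts
U_THR   = 1.0   # firing threshold Uthr
BETA    = 0.99  # membrane decay rate
ALPHA   = 2.0   # surrogate-gradient sharpness α
LR      = 1e-4  # Adam learning rate
BATCH   = 128   # batch size


class ATanSurrogate(torch.autograd.Function):
    """Heaviside spike with an arctan surrogate gradient.

    The arctan form is a common choice in snnTorch/SpikingJelly; the exact
    surrogate used by the authors is an assumption here.
    """

    @staticmethod
    def forward(ctx, u):
        ctx.save_for_backward(u)
        return (u >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (u,) = ctx.saved_tensors
        # Derivative of (1/pi) * arctan(pi * ALPHA * u / 2) + 1/2
        sg = ALPHA / (2.0 * (1.0 + (torch.pi * ALPHA * u / 2.0) ** 2))
        return grad_out * sg


class LIFNeuron(nn.Module):
    """Leaky integrate-and-fire layer with soft reset (reset by subtraction)."""

    def __init__(self, beta=BETA, threshold=U_THR):
        super().__init__()
        self.beta = beta
        self.threshold = threshold

    def forward(self, current_seq):
        # current_seq: (Ts, batch, features) input currents over time steps.
        u = torch.zeros_like(current_seq[0])
        spikes = []
        for t in range(current_seq.shape[0]):
            u = self.beta * u + current_seq[t]          # leaky integration
            s = ATanSurrogate.apply(u - self.threshold)  # spike if u >= Uthr
            u = u - s * self.threshold                   # soft reset
            spikes.append(s)
        return torch.stack(spikes)                       # (Ts, batch, features)


if __name__ == "__main__":
    # Hypothetical toy dimensions; the real model and data loaders live in SeqSNN.
    features, hidden = 32, 64
    net = nn.Sequential(nn.Linear(features, hidden), LIFNeuron())
    optimizer = torch.optim.Adam(net.parameters(), lr=LR)

    x = torch.randn(T_STEPS, BATCH, features)
    out = net(x)
    print(out.shape)  # torch.Size([4, 128, 64]); entries are binary spikes
```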