CLIF: Complementary Leaky Integrate-and-Fire Neuron for Spiking Neural Networks
Authors: Yulong Huang, Xiaopeng Lin, Hongwei Ren, Haotian Fu, Yue Zhou, Zunchang Liu, Biao Pan, Bojun Cheng
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on a variety of datasets demonstrate CLIF's clear performance advantage over other neuron models. |
| Researcher Affiliation | Academia | ¹Function Hub, The Hong Kong University of Science and Technology (Guangzhou), Guangzhou, China; ²School of Integrated Circuit Science and Engineering, Beihang University, Beijing, China. |
| Pseudocode | Yes | Algorithm 1 Core function for CLIF model (a hedged neuron sketch follows this table) |
| Open Source Code | Yes | The code is available at https://github.com/HuuYuLong/Complementary-LIF. |
| Open Datasets | Yes | CIFAR-10/100: The CIFAR-10 and CIFAR-100 datasets (Krizhevsky et al., 2009)... Tiny-ImageNet: Tiny-ImageNet contains 200 categories... DVS-CIFAR10: The DVS-CIFAR10 dataset (Li et al., 2017)... DVS-Gesture: The DVS128 Gesture dataset (Amir et al., 2017)... |
| Dataset Splits | No | For the CIFAR-10 and CIFAR-100 datasets... each dataset comprising 50,000 training samples and 10,000 testing samples. The paper does not explicitly mention a validation set split or provide specific percentages/counts for train/validation/test splits, nor does it refer to a standard split that includes validation. |
| Hardware Specification | No | No specific hardware details (such as GPU/CPU models or cloud instance types) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The event-to-frame integration is handled with the SpikingJelly (Fang et al., 2023) framework. No specific version numbers for software dependencies (e.g., Python, PyTorch, SpikingJelly) are provided. (A hedged loading sketch follows this table.) |
| Experiment Setup | Yes | Unless otherwise specified or for the purpose of comparative experiments, the experiments in this paper adhere to the following settings and data preprocessing: all self-implementations use rectangular surrogate functions with α = Vth = 1, and the decay constant τ is set to 2.0. All random seeds are set to 2022. For all loss functions, TET (Deng et al., 2021) is used with a loss lambda of 0.05, as implemented in (Meng et al., 2023). Table 5 (training parameters; format: optimizer, weight decay, batch size, epochs, learning rate): CIFAR10: SGD, 5e-5, 128, 200, 0.1; CIFAR100: SGD, 5e-4, 128, 200, 0.1; Tiny-ImageNet: SGD, 5e-4, 256, 300, 0.1; DVS-CIFAR10: SGD, 5e-4, 128, 300, 0.05; DVS-Gesture: SGD, 5e-4, 16, 300, 0.1. (A hedged setup sketch follows this table.) |
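
The quoted settings (rectangular surrogate with α = 1, Vth = 1, τ = 2.0) are enough to sketch the baseline neuron dynamics the paper builds on. Below is a minimal PyTorch sketch of a plain LIF neuron with a rectangular surrogate gradient under those assumptions; the paper's CLIF neuron adds a complementary membrane potential on top of this (its Algorithm 1 and the linked repository), which is not reproduced here, and the exact leak/reset form is an assumption rather than the authors' code.

```python
import torch


class RectangleSurrogate(torch.autograd.Function):
    """Heaviside spike in the forward pass; rectangular window of width alpha in the backward pass."""

    @staticmethod
    def forward(ctx, v_minus_th, alpha=1.0):
        ctx.save_for_backward(v_minus_th)
        ctx.alpha = alpha
        return (v_minus_th >= 0).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v_minus_th,) = ctx.saved_tensors
        # Gradient is 1/alpha inside |v - Vth| < alpha/2, zero outside.
        grad = (v_minus_th.abs() < ctx.alpha / 2).float() / ctx.alpha
        return grad_output * grad, None


def lif_forward(inputs, tau=2.0, v_th=1.0):
    """Run a LIF neuron over a [T, batch, ...] input-current sequence with hard reset."""
    v = torch.zeros_like(inputs[0])
    spikes = []
    for x_t in inputs:
        v = v + (x_t - v) / tau          # leaky integration with decay constant tau
        s = RectangleSurrogate.apply(v - v_th)
        v = v * (1.0 - s)                # hard reset after a spike
        spikes.append(s)
    return torch.stack(spikes)


# Example: 4 time steps, batch of 2, 10 input currents per step.
out_spikes = lif_forward(torch.randn(4, 2, 10))
```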
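The Software Dependencies row notes that event-to-frame integration is done with SpikingJelly but gives no versions or parameters. The sketch below shows one way to load DVS128 Gesture as frames with SpikingJelly's `DVS128Gesture` dataset class; the local path, frame count of 16, and `split_by='number'` are assumptions, not values from the paper.

```python
from spikingjelly.datasets.dvs128_gesture import DVS128Gesture

# Hedged sketch: integrate raw DVS events into fixed-count frames.
train_set = DVS128Gesture(
    root='./data/dvs128_gesture',  # assumed local path to the downloaded dataset
    train=True,
    data_type='frame',             # request integrated frames instead of raw events
    frames_number=16,              # assumed number of frames per sample
    split_by='number',             # slice the event stream into equal-count segments
)
```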
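For the Experiment Setup row, the seeding and optimizer choices quoted above translate directly into a short configuration sketch. The snippet below uses the CIFAR-100 row of Table 5 (SGD, learning rate 0.1, weight decay 5e-4); the momentum value and the placeholder `model` are assumptions, since neither the backbone nor a momentum setting appears in the quoted text.

```python
import torch

torch.manual_seed(2022)  # random seed stated in the quoted setup

model = torch.nn.Linear(3 * 32 * 32, 100)  # placeholder for the actual SNN backbone
optimizer = torch.optim.SGD(
    model.parameters(),
    lr=0.1,             # CIFAR-100 learning rate from Table 5
    weight_decay=5e-4,  # CIFAR-100 weight decay from Table 5
    momentum=0.9,       # assumption: common default, not specified in the quoted text
)
```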