NDOT: Neuronal Dynamics-based Online Training for Spiking Neural Networks

Authors: Haiyan Jiang, Giulia De Masi, Huan Xiong, Bin Gu

ICML 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "Experiments on CIFAR-10, CIFAR-100, and CIFAR10-DVS demonstrate the superior performance of our NDOT method on large-scale static and neuromorphic datasets within a small number of time steps." |
| Researcher Affiliation | Collaboration | "1 Department of Machine Learning, MBZUAI, Abu Dhabi, UAE; 2 Technology Innovation Institute, Abu Dhabi, UAE; 3 Sant'Anna School of Advanced Studies, Italy; 4 Harbin Institute of Technology, China; 5 School of Artificial Intelligence, Jilin University, China. Correspondence to: Huan Xiong <Huan.Xiong@mbzuai.ac.ae>, Bin Gu <Bin.Gu@mbzuai.ac.ae>." |
| Pseudocode | Yes | "Algorithm 1: One iteration of NDOT for training SNNs" (a hedged structural sketch follows the table) |
| Open Source Code | Yes | "The codes are available at https://github.com/HaiyanJiang/SNN-NDOT." |
| Open Datasets | Yes | "In this section, we conduct extensive experiments on CIFAR10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009), and CIFAR10-DVS (Li et al., 2017) to demonstrate the superior performance of our proposed NDOT method on large-scale static and neuromorphic datasets." (a loading sketch follows the table) |
| Dataset Splits | No | No explicit train/validation/test percentages or sample counts are stated for CIFAR-10 or CIFAR-100. For CIFAR10-DVS: "Following standard procedures, we partition the dataset into 9000 training samples and 1000 testing samples." This specifies a train/test split but no validation set. (The split sketch after the table reproduces these counts.) |
| Hardware Specification | No | No specific hardware details (e.g., GPU model, CPU type, memory) are provided; the paper mentions only "GPU" in Figure 2. |
| Software Dependencies | No | The paper mentions PyTorch's autograd functionality but does not specify its version or any other software dependency versions. |
| Experiment Setup | Yes | "We use the SGD optimizer with no weight decay. The initial learning rate is 0.1 and will cosine decay to 0 during the training for all experiments. For the hyperparameters of LIF neuron models, we set Vth = 1, λ = 0.5." (a configuration sketch follows the table) |
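For the "Pseudocode" row: the paper's Algorithm 1 describes one NDOT training iteration, which is not reproduced here. The following is a minimal, hypothetical PyTorch sketch of a *generic* online-through-time iteration for an SNN with LIF neurons (Vth = 1, λ = 0.5, per the quoted setup): per-time-step loss, membrane state detached across steps, surrogate gradient at the spike. Every name, shape, the surrogate, and the loss are our assumptions, not the authors' method.

```python
# Hypothetical sketch of one online-training iteration for a LIF-based SNN.
# This is NOT the paper's Algorithm 1; see https://github.com/HaiyanJiang/SNN-NDOT
# for the actual implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

V_TH, LAMBDA = 1.0, 0.5  # LIF hyperparameters quoted in the Experiment Setup row

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, rectangular surrogate gradient in the backward."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= V_TH).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out * ((v - V_TH).abs() < 0.5).float()  # gradient only near threshold

spike_fn = SurrogateSpike.apply

def online_iteration(fc, optimizer, x_seq, target, T):
    """One iteration with per-step losses; detaching the membrane state keeps
    no computation graph across time steps (the 'online' property)."""
    v = torch.zeros(x_seq.size(1), fc.out_features)   # membrane potential
    optimizer.zero_grad()
    for t in range(T):
        v = LAMBDA * v.detach() + fc(x_seq[t])        # leaky integration, state detached
        s = spike_fn(v)                               # spike emission
        v = v - s.detach() * V_TH                     # soft reset after a spike
        loss = F.cross_entropy(s, target) / T         # instantaneous loss at step t
        loss.backward()                               # gradients accumulate over steps
    optimizer.step()

# Tiny smoke test with random data (all shapes are made up).
fc = nn.Linear(100, 10)
opt = torch.optim.SGD(fc.parameters(), lr=0.1, weight_decay=0.0)
x = torch.rand(6, 32, 100)                            # (time steps, batch, features)
y = torch.randint(0, 10, (32,))
online_iteration(fc, opt, x, y, T=6)
```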
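For the "Open Datasets" and "Dataset Splits" rows: the static datasets are standard torchvision downloads, and the quoted CIFAR10-DVS partition gives counts only (9000 train / 1000 test). A hedged sketch, assuming torchvision for CIFAR-10/100; CIFAR10-DVS itself needs an event-data loader (e.g. SpikingJelly or Tonic, neither named by the paper), and the shuffling seed below is our invention since the paper does not state how the 9000/1000 indices are chosen.

```python
# Hedged sketch: loading the named open datasets and reproducing the quoted
# CIFAR10-DVS 9000/1000 partition by counts.
import torch
from torch.utils.data import Subset
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
cifar10 = datasets.CIFAR10("./data", train=True, download=True, transform=to_tensor)
cifar100 = datasets.CIFAR100("./data", train=True, download=True, transform=to_tensor)

def split_9000_1000(dvs_dataset, seed=0):
    """Partition a 10,000-sample CIFAR10-DVS dataset into 9000 train / 1000 test.
    Only the counts come from the paper; the shuffle and seed are assumptions."""
    g = torch.Generator().manual_seed(seed)
    idx = torch.randperm(len(dvs_dataset), generator=g).tolist()
    return Subset(dvs_dataset, idx[:9000]), Subset(dvs_dataset, idx[9000:10000])
```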
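For the "Experiment Setup" row: the quoted configuration maps directly onto standard PyTorch components. A minimal sketch, with `model` and `num_epochs` as placeholders; momentum is not mentioned in the quote, so PyTorch's default of 0 is kept rather than guessed.

```python
# Sketch of the quoted training configuration: SGD without weight decay,
# initial learning rate 0.1 with cosine decay to 0 over training.
import torch

model = torch.nn.Linear(100, 10)   # stand-in for the actual SNN
num_epochs = 300                   # placeholder; the quote does not state epochs

optimizer = torch.optim.SGD(model.parameters(), lr=0.1, weight_decay=0.0)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=num_epochs, eta_min=0.0)  # 0.1 -> 0 cosine schedule

V_TH, LAMBDA = 1.0, 0.5            # LIF hyperparameters: Vth = 1, λ = 0.5

for epoch in range(num_epochs):
    # ... one training epoch (omitted) ...
    scheduler.step()
```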