AutoSNN: Towards Energy-Efficient Spiking Neural Networks

Authors: Byunggook Na, Jisoo Mok, Seongsik Park, Dongjin Lee, Hyeokjun Choe, Sungroh Yoon

ICML 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We thoroughly demonstrate the effectiveness of AutoSNN on various datasets including neuromorphic datasets. ... We evaluated the SNNs searched by AutoSNN on two types of datasets: static datasets (CIFAR10, CIFAR100 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and Tiny-ImageNet-200) and neuromorphic datasets (CIFAR10-DVS (Li et al., 2017) and DVS128-Gesture (Amir et al., 2017)).
Researcher Affiliation | Collaboration | 1 Samsung Advanced Institute of Technology, South Korea; 2 Department of Electrical and Computer Engineering, Seoul National University, South Korea; 3 Korea Institute of Science and Technology, South Korea; 4 Interdisciplinary Program in Artificial Intelligence, Seoul National University, South Korea.
Pseudocode | Yes | Algorithm 1: Evolutionary search algorithm of AutoSNN. (A hedged sketch of such a search loop follows the table.)
Open Source Code | Yes | The code of AutoSNN is available at https://github.com/nabk89/AutoSNN. ... We implemented AutoSNN and all the experiments using SpikingJelly, and have included the codes in the supplementary materials (code.zip).
Open Datasets | Yes | static datasets (CIFAR10 (Krizhevsky et al., 2009), CIFAR100 (Krizhevsky et al., 2009), SVHN (Netzer et al., 2011), and Tiny-ImageNet-200) and neuromorphic datasets (CIFAR10-DVS (Li et al., 2017) and DVS128-Gesture (Amir et al., 2017)).
Dataset Splits | Yes | The dataset is divided into 8:2 for Dtrain and Dval. ... The training data of CIFAR10 were divided into 8:2 for Dtrain and Dval, which were used to train the super-network and evaluate candidate architectures during the spike-aware evolutionary search, respectively. (A split sketch follows the table.)
Hardware Specification | Yes | on a single NVIDIA 2080ti GPU. ... on a single GeForce RTX 2080 Ti GPU
Software Dependencies | No | We implemented AutoSNN and all the experiments using SpikingJelly, and have included the codes in the supplementary materials (code.zip).
Experiment Setup | Yes | We use the Adam optimizer (Kingma & Ba, 2015) with a learning rate of 0.001 and cutout data augmentation (DeVries & Taylor, 2017) to train the super-network and the searched SNNs for 600 epochs on a single NVIDIA 2080ti GPU. For all architectures, we use PLIF neurons (Fang et al., 2021b) with Vth = 0, Vreset = 0, 8 timesteps, and an initial τ of 2. (A training-setup sketch follows the table.)
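
The Pseudocode row points to Algorithm 1, the spike-aware evolutionary search of AutoSNN. The sketch below illustrates the general shape of such a search loop rather than the paper's exact procedure: the helpers sample, mutate, crossover, and evaluate are hypothetical stand-ins for the released code, and the fitness exponent is an illustrative value, not the paper's spike-aware fitness definition.

```python
import random

def spike_aware_fitness(val_acc, n_spikes, lam=-0.08):
    """Illustrative fitness that rewards accuracy and penalizes spikes.

    The exponent `lam` is a made-up value; the actual spike-aware fitness
    is defined in the AutoSNN paper (Algorithm 1 and its surrounding text).
    """
    return val_acc * (max(n_spikes, 1) ** lam)

def evolutionary_search(sample, mutate, crossover, evaluate,
                        pop_size=50, generations=20, n_parents=10, seed=0):
    """Generic evolutionary loop in the spirit of Algorithm 1.

    sample()        -> random candidate architecture from the search space
    mutate(a)       -> perturbed copy of candidate a
    crossover(a, b) -> child combining candidates a and b
    evaluate(a)     -> (val_accuracy, spike_count), measured on D_val with
                       the shared super-network weights
    """
    rng = random.Random(seed)
    population = [sample() for _ in range(pop_size)]
    scored = [(a, spike_aware_fitness(*evaluate(a))) for a in population]
    for _ in range(generations):
        scored.sort(key=lambda x: x[1], reverse=True)
        parents = [a for a, _ in scored[:n_parents]]       # keep the elites
        children = [mutate(rng.choice(parents)) if rng.random() < 0.5
                    else crossover(*rng.sample(parents, 2))
                    for _ in range(pop_size - n_parents)]
        scored = scored[:n_parents] + [(c, spike_aware_fitness(*evaluate(c)))
                                       for c in children]
    best, _ = max(scored, key=lambda x: x[1])
    return best
```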
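
The Dataset Splits row reports an 8:2 split of the CIFAR10 training data into Dtrain and Dval. A minimal way to reproduce such a split with torchvision is sketched below; the transform and the random seed are placeholder choices, not values taken from the paper.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# 8:2 split of the CIFAR-10 training set into D_train / D_val; the transform
# and the seed are placeholders, not values from the paper.
full_train = datasets.CIFAR10(root="./data", train=True, download=True,
                              transform=transforms.ToTensor())
n_train = int(0.8 * len(full_train))            # 40,000 images
n_val = len(full_train) - n_train               # 10,000 images
d_train, d_val = random_split(
    full_train, [n_train, n_val],
    generator=torch.Generator().manual_seed(0))  # fixed seed for reproducibility
```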
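
The Experiment Setup and Software Dependencies rows together describe Adam with a learning rate of 0.001, 600 epochs, and PLIF neurons with 8 timesteps and an initial τ of 2, implemented in SpikingJelly. The sketch below shows how those pieces might be wired together, assuming SpikingJelly's activation_based API; the network body is a toy placeholder (not a searched AutoSNN architecture), cutout augmentation is omitted, and the neuron threshold and reset are left at the library defaults.

```python
import torch
import torch.nn.functional as F
from spikingjelly.activation_based import neuron, functional

T = 8            # timesteps, as reported
LR = 1e-3        # Adam learning rate, as reported
EPOCHS = 600     # training epochs, as reported

# Placeholder CIFAR-10-sized network; the searched AutoSNN architectures are
# defined in the released code and are not reproduced here.
net = torch.nn.Sequential(
    torch.nn.Conv2d(3, 64, kernel_size=3, padding=1, bias=False),
    neuron.ParametricLIFNode(init_tau=2.0),  # PLIF neuron; threshold/reset at defaults
    torch.nn.Flatten(),
    torch.nn.Linear(64 * 32 * 32, 10),
)
optimizer = torch.optim.Adam(net.parameters(), lr=LR)

def train_step(x, y):
    """One optimization step on a batch of static images (cutout omitted)."""
    logits = sum(net(x) for _ in range(T)) / T   # repeat the input over T timesteps
    loss = F.cross_entropy(logits, y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    functional.reset_net(net)   # clear membrane potentials before the next batch
    return loss.item()
```

For neuromorphic inputs such as CIFAR10-DVS, the per-timestep event frames would be fed to the network one by one instead of repeating a static image.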