Pruning of Deep Spiking Neural Networks through Gradient Rewiring

Authors: Yanqi Chen, Zhaofei Yu, Wei Fang, Tiejun Huang, Yonghong Tian

IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The experimental results show that the proposed method achieves the minimal loss of SNN performance reported so far on the MNIST and CIFAR-10 datasets.
Researcher Affiliation | Academia | Yanqi Chen (1,3), Zhaofei Yu (1,2,3), Wei Fang (1,3), Tiejun Huang (1,2,3), and Yonghong Tian (1,3); affiliations: (1) Department of Computer Science and Technology, Peking University; (2) Institute for Artificial Intelligence, Peking University; (3) Peng Cheng Laboratory.
Pseudocode | Yes | Algorithm 1: Gradient Rewiring with SGD (a hedged code sketch of the general idea follows after this table).
Open Source Code | Yes | Our codes are available at https://github.com/Yanqi-Chen/Gradient-Rewiring.
Open Datasets | Yes | We evaluate the performance of the Grad R algorithm on image recognition benchmarks, namely the MNIST and CIFAR-10 datasets.
Dataset Splits | No | The paper mentions training and test sets but does not explicitly describe a separate validation split or its size/percentage.
Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper states that the deep SNN implementation is based on the authors' open-source SNN framework SpikingJelly [Fang et al., 2020a] and mentions the Adam optimizer, but provides no version numbers for any software dependencies or libraries.
Experiment Setup | Yes | The choice of all hyperparameters is shown in Table 3 and Table 4. For CIFAR-10, Table 3 lists N (number of epochs) = 2048, batch size = 16, T (number of timesteps) = 8, τm (membrane constant) = 2.0, u_th (firing threshold) = 1.0, u_rest (resting potential) = 0.0, p (target sparsity) = 95%, η (learning rate) = 1e-4, and β1, β2 (Adam decay) = 0.9, 0.999; MNIST uses different values. (The CIFAR-10 values are also collected into an illustrative config sketch after this table.)
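
The paper's pseudocode (Algorithm 1: Gradient Rewiring with SGD) is not reproduced on this page. The snippet below is only a minimal PyTorch sketch of the general idea, assuming a Deep R-style reparameterization w = s · ReLU(θ) with a fixed random sign s, a straight-through gradient so that dormant connections (θ ≤ 0) still receive loss gradients and can be rewired, and an L1-style penalty α · sign(θ) that encourages sparsity. The names `GradRewiredLinear` and `grad_r_sgd_step` and the exact penalty form are illustrative assumptions, not the authors' implementation; consult the paper and the linked repository for the exact Algorithm 1.

```python
import torch
import torch.nn.functional as F


class GradRewiredLinear(torch.nn.Module):
    """Linear layer whose effective weight is s * ReLU(theta) (illustrative sketch)."""

    def __init__(self, in_features, out_features):
        super().__init__()
        # Hidden parameter theta; a connection is dormant (pruned) when theta <= 0.
        self.theta = torch.nn.Parameter(0.01 * torch.randn(out_features, in_features))
        # Fixed random sign per connection, frozen during training (Deep R-style).
        self.register_buffer("sign", torch.sign(torch.randn(out_features, in_features)))

    def forward(self, x):
        # Straight-through ReLU: the forward pass uses ReLU(theta), so dormant weights are 0,
        # while the backward pass lets gradients reach dormant connections so they can regrow.
        w_pos = self.theta + (torch.relu(self.theta) - self.theta).detach()
        return F.linear(x, self.sign * w_pos)


def grad_r_sgd_step(model, loss, lr=1e-4, alpha=1e-4):
    """One SGD step on the hidden parameters with an assumed L1-style sparsity penalty."""
    model.zero_grad()
    loss.backward()
    with torch.no_grad():
        for m in model.modules():
            if isinstance(m, GradRewiredLinear):
                # alpha * sign(theta) is an assumed sparsity prior, not the paper's exact term.
                m.theta -= lr * (m.theta.grad + alpha * torch.sign(m.theta))


# Usage sketch on random data (a stand-in for an SNN readout layer).
layer = GradRewiredLinear(784, 10)
x, y = torch.randn(16, 784), torch.randint(0, 10, (16,))
loss = F.cross_entropy(layer(x), y)
grad_r_sgd_step(layer, loss)
print("dormant fraction:", (layer.theta <= 0).float().mean().item())
```

The CIFAR-10 hyperparameters quoted from Table 3 can be grouped into a plain configuration dictionary; the key names below are illustrative and not the paper's notation.

```python
# CIFAR-10 hyperparameters as quoted from Table 3 (key names are illustrative).
cifar10_config = {
    "epochs": 2048,            # N
    "batch_size": 16,
    "timesteps": 8,            # T
    "tau_m": 2.0,              # membrane constant
    "u_threshold": 1.0,        # firing threshold
    "u_rest": 0.0,             # resting potential
    "target_sparsity": 0.95,   # p = 95%
    "learning_rate": 1e-4,     # eta
    "adam_betas": (0.9, 0.999),
}
```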