Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
QP-SNN: Quantized and Pruned Spiking Neural Networks
Authors: Wenjie Wei, Malu Zhang, Zijian Zhou, Ammar Belatreche, Yimeng Shan, Yu Liang, Honglin Cao, Jieyuan Zhang, Yang Yang
ICLR 2025
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments demonstrate that integrating two proposed methods into the baseline allows QP-SNN to achieve state-of-the-art performance and efficiency, underscoring its potential for enhancing SNN deployment in edge intelligence computing. |
| Researcher Affiliation | Academia | 1University of Electronic Science and Technology of China 2Northumbria University, 3Liaoning Technical University |
| Pseudocode | Yes | Appendix A (The Overall Workflow of QP-SNN): "We present the workflow of QP-SNN in Algorithm 1, which consists of two main steps: quantization and pruning." |
| Open Source Code | No | The paper does not provide explicit statements about releasing code, nor does it provide a link to a code repository. |
| Open Datasets | Yes | We first evaluate our method on image classification tasks, including static datasets like CIFAR (Krizhevsky et al. (2009)), Tiny ImageNet, and ImageNet-1k (Deng et al. (2009)), alongside neuromorphic DVS-CIFAR10 (Li et al. (2017)). |
| Dataset Splits | Yes | CIFAR-10 and CIFAR-100 are color image datasets, each containing 50,000 training images and 10,000 testing images. ... The Tiny ImageNet dataset is a subset of the ImageNet dataset, consisting of 200 categories, with each category containing 500 training images and 50 test images. |
| Hardware Specification | No | The paper discusses the applicability of SNNs in "resource-limited edge devices" and mentions existing neuromorphic hardware (SpiNNaker, TrueNorth, Loihi, Tianjic) as context, but it does not specify any particular hardware (GPU, CPU models, etc.) used to run the experiments described in the paper. |
| Software Dependencies | No | The paper mentions using specific methods like spatio-temporal backpropagation (STBP) and the straight-through estimator (STE) but does not provide specific version numbers for software libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used for implementation. |
| Experiment Setup | Yes | Table 8 (experimental setups), with hyper-parameters listed for CIFAR-10/100, Tiny ImageNet, ImageNet, and DVS-CIFAR10 respectively: Timestep 2, 4 / 4 / 4 / 10; Resolution 32×32 / 64×64 / 224×224 / 48×48; Batch size 256 / 256 / 256 / 64; Epochs (train/fine-tune) 300/150, 300/150, 320/200, 300/150; Optimizer (train/fine-tune) SGD/Adam, SGD/Adam, SGD/SGD, SGD/Adam; Initial lr (train/fine-tune) 0.1/0.001, 0.1/0.001, 0.1/0.05, 0.1/0.001; Learning rate decay: cosine for all datasets. |
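Since the authors' code is not released, the Table 8 setup for the CIFAR-10/100 column can be sketched as a plain configuration dictionary, with the cosine learning-rate schedule written out explicitly. All names here are illustrative assumptions, not identifiers from the paper.

```python
# Sketch of the reported CIFAR-10/100 training setup from Table 8.
# Key names are hypothetical; values are taken from the paper's table.
import math

SETUP = {
    "timestep": [2, 4],
    "resolution": (32, 32),
    "batch_size": 256,
    "epochs_train": 300,
    "epochs_finetune": 150,
    "optimizer_train": "SGD",
    "optimizer_finetune": "Adam",
    "lr_train": 0.1,
    "lr_finetune": 0.001,
    "lr_decay": "cosine",
}

def cosine_lr(epoch: int, total_epochs: int, lr0: float) -> float:
    """Standard cosine annealing: lr0 at epoch 0, decaying to 0 at the
    final epoch. Assumed to match the paper's 'Cosine' decay entry."""
    return 0.5 * lr0 * (1.0 + math.cos(math.pi * epoch / total_epochs))
```

For example, `cosine_lr(0, 300, 0.1)` gives the initial rate 0.1, and the rate reaches half its initial value at the schedule's midpoint.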