Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Faster and Stronger: When ANN-SNN Conversion Meets Parallel Spiking Calculation
Authors: Zecheng Hao, Qichao Ma, Kang Chen, Yi Zhang, Zhaofei Yu, Tiejun Huang
ICML 2025 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have confirmed the significant performance advantages of our method for various conversion cases under ultra-low time latency. ... Experiments have demonstrated the superior performance of our method for both conventional and training-free conversion. For example, we achieve a top-1 accuracy of 72.90% on ImageNet-1k, ResNet-34 within merely 4 time-steps. |
| Researcher Affiliation | Collaboration | 1 School of Computer Science, Peking University, Beijing, China; 2 China Mobile Research Institute, Beijing, China; 3 Institute for Artificial Intelligence, Peking University, Beijing, China. |
| Pseudocode | Yes | Algorithm 1 The overall pseudo-code for universal parallel conversion |
| Open Source Code | Yes | Code is available at https://github.com/hzc1208/Parallel_Conversion. |
| Open Datasets | Yes | Consistent with previous conversion learning works, we conduct performance validation on CIFAR (Krizhevsky et al., 2009) and ImageNet (Deng et al., 2009) datasets by using two types of network backbones, VGG (Simonyan & Zisserman, 2014) and ResNet (He et al., 2016). |
| Dataset Splits | Yes | Regarding the error calibration technique, we utilize the training dataset as the calibration data to iterate for 1 epoch. |
| Hardware Specification | Yes | The inference speeds in Fig.2 and Fig.S1 are measured on a single NVIDIA RTX 4090 GPU. |
| Software Dependencies | No | The paper mentions software components like SGD optimizer, Cosine Annealing, and data augmentation techniques but does not specify their version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | For pretrained QCFS ANN models, we use SGD optimizer (Bottou, 2012), the optimization strategy of Cosine Annealing (Loshchilov & Hutter, 2017) and data augmentation techniques (DeVries & Taylor, 2017; Cubuk et al., 2019); the corresponding hyper-parameter settings are: lr = 0.1, wd = 5×10⁻⁴ for CIFAR-10, lr = 0.02, wd = 5×10⁻⁴ for CIFAR-100 and lr = 0.1, wd = 1×10⁻⁴ for ImageNet-1k. ... The learning momentum α mentioned in Algorithm 1 is set to 0.99. |
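The reported setup can be sketched in plain Python. This is a minimal illustration, not the authors' code: it assumes standard cosine annealing without warm restarts with a floor of `min_lr = 0`, a hypothetical epoch budget of 300, and a generic exponential-moving-average update for the momentum constant α = 0.99 (the exact quantity tracked by Algorithm 1 is defined in the paper, not reproduced here).

```python
import math

def cosine_annealing_lr(base_lr, epoch, total_epochs, min_lr=0.0):
    """Cosine-annealed learning rate (Loshchilov & Hutter, 2017), no restarts.

    Decays from base_lr at epoch 0 to min_lr at total_epochs.
    """
    return min_lr + 0.5 * (base_lr - min_lr) * (
        1 + math.cos(math.pi * epoch / total_epochs)
    )

def ema_update(stat, new_value, alpha=0.99):
    """Generic momentum/EMA update with the paper's alpha = 0.99."""
    return alpha * stat + (1 - alpha) * new_value

# Reported base settings: CIFAR-10 lr=0.1 (wd=5e-4),
# CIFAR-100 lr=0.02 (wd=5e-4), ImageNet-1k lr=0.1 (wd=1e-4).
# Epoch budget 300 is an assumption for illustration only.
lr_start = cosine_annealing_lr(0.1, 0, 300)    # full base lr at epoch 0
lr_mid = cosine_annealing_lr(0.1, 150, 300)    # half the base lr at midpoint
```

With this schedule the learning rate falls smoothly from 0.1 to 0, passing through 0.05 at the halfway epoch; the EMA helper shows how a running statistic dominated by its history (α = 0.99) is typically maintained.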