Signed Neuron with Memory: Towards Simple, Accurate and High-Efficient ANN-SNN Conversion
Authors: Yuchen Wang, Malu Zhang, Yi Chen, Hong Qu
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct experiments on challenging datasets including CIFAR10 (95.44% top-1), CIFAR100 (78.3% top-1) and ImageNet (73.16% top-1). Experimental results demonstrate that the proposed method outperforms the state-of-the-art works in terms of accuracy and inference time. |
| Researcher Affiliation | Academia | Yuchen Wang, Malu Zhang, Yi Chen and Hong Qu, School of Computer Science and Engineering, University of Electronic Science and Technology of China; yuchenwang@std.uestc.edu.cn, maluzhang@uestc.edu.cn, chenyi@std.uestc.edu.cn, hongqu@uestc.edu.cn |
| Pseudocode | No | The paper describes the neuron model mathematically (Eq. 7-11) but does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures); a hedged sketch of such a signed neuron with memory follows this table. |
| Open Source Code | Yes | The code is available at https://github.com/ppppps/ANN2SNNConversion_SNM_Neuron_Norm |
| Open Datasets | Yes | We use VGG and ResNet network structures to conduct experiments on CIFAR10, CIFAR100 [1], and ImageNet2012 [2] datasets. ... [1] https://www.cs.toronto.edu/~kriz/cifar.html [2] https://image-net.org/challenges/LSVRC/2012/ |
| Dataset Splits | No | The paper mentions using a "training set" to obtain the maximum activation value for neuron-wise normalization (sketched after this table), but it does not specify any dataset splits (e.g., percentages or counts) for training, validation, or testing. |
| Hardware Specification | No | The paper estimates energy using theoretical per-operation costs (e.g., 4.6 pJ per multiply-accumulate and 0.9 pJ per addition, from [Horowitz, 2014]; a back-of-the-envelope estimate follows this table), but it does not specify any concrete hardware details such as exact GPU/CPU models, processor types, or memory used for running its experiments. |
| Software Dependencies | No | The paper mentions general algorithms and techniques like SGD and Kaiming normal initialization but does not provide specific software dependencies with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, Python 3.x). |
| Experiment Setup | Yes | The initialization of ANN network parameters adopts Kaiming normal initialization [He et al., 2015], and the network training adopts the SGD algorithm with 0.9 momentum followed by milestone learning-rate decay. An L2 penalty with a value of 5e-4 is also added. A minimal PyTorch sketch of this setup follows this table. |
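Since the paper gives its neuron model only as equations (Eq. 7-11) with no pseudocode, the following is a minimal sketch of how a signed integrate-and-fire neuron with memory could look in Python. The class name, the soft-reset rule, and the memory bookkeeping are assumptions inferred from the paper's title and the general ANN-SNN conversion literature, not the authors' exact formulation.

```python
class SignedNeuronWithMemory:
    """Hypothetical signed IF neuron with a memory of emitted spikes.

    Sketch only: the neuron integrates input current, fires +1 when
    the membrane potential crosses +threshold (soft reset by
    subtraction), and fires -1 below -threshold, but only if earlier
    positive spikes exist to be revoked, so the cumulative spike
    count never goes negative.
    """

    def __init__(self, threshold: float = 1.0):
        self.threshold = threshold
        self.v = 0.0      # membrane potential
        self.memory = 0   # net count of emitted signed spikes

    def step(self, input_current: float) -> int:
        self.v += input_current
        if self.v >= self.threshold:
            self.v -= self.threshold
            self.memory += 1
            return +1
        if self.v <= -self.threshold and self.memory > 0:
            self.v += self.threshold
            self.memory -= 1
            return -1
        return 0
```

Over many timesteps the net spike count divided by the number of steps approximates the corresponding ANN activation, which is the standard rate-coding argument behind ANN-SNN conversion.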
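The "maximum activation value" mentioned in the Dataset Splits row refers to data-based normalization: activation maxima recorded on the training set rescale each layer's weights so that ANN outputs fit the spiking neuron's firing-rate range. Below is a minimal sketch of a neuron-wise variant in the spirit of standard conversion pipelines; the function name and array layout are illustrative assumptions.

```python
import numpy as np

def neuron_wise_normalize(weights, bias, max_in, max_out):
    """Rescale one fully connected layer so each output neuron's
    maximum training-set activation becomes 1.

    weights: (out, in), bias: (out,)
    max_in:  (in,)  per-neuron activation maxima of previous layer
    max_out: (out,) per-neuron activation maxima of this layer
    """
    w = weights * (max_in[None, :] / max_out[:, None])
    b = bias / max_out
    return w, b

# Toy usage with random values
rng = np.random.default_rng(0)
w, b = rng.normal(size=(4, 8)), rng.normal(size=4)
max_in = np.abs(rng.normal(size=8)) + 0.1
max_out = np.abs(rng.normal(size=4)) + 0.1
w_n, b_n = neuron_wise_normalize(w, b, max_in, max_out)
```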
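The energy comparison rests on the per-operation estimates from [Horowitz, 2014] cited in the Hardware Specification row. A back-of-the-envelope calculation shows how such a comparison is typically formed; the operation counts, spike sparsity, and timestep count below are made-up placeholders, not figures from the paper.

```python
# Per-operation energy in 45 nm CMOS [Horowitz, 2014]
E_MAC = 4.6e-12  # J per 32-bit float multiply-accumulate (ANN)
E_ADD = 0.9e-12  # J per 32-bit float addition (spike-driven SNN)

ann_macs = 1e9    # assumed MACs in one ANN forward pass
snn_adds = 1e8    # assumed additions per SNN timestep (sparse spikes)
timesteps = 16    # assumed SNN inference window

e_ann = ann_macs * E_MAC
e_snn = snn_adds * timesteps * E_ADD
print(f"ANN ~{e_ann * 1e3:.2f} mJ, SNN ~{e_snn * 1e3:.2f} mJ, "
      f"ratio ~{e_ann / e_snn:.1f}x")
```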
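Finally, the Experiment Setup row maps directly onto standard PyTorch primitives. A minimal sketch, assuming PyTorch; the toy model and the milestone epochs are placeholders, since the quoted setup does not list them.

```python
import torch
import torch.nn as nn

# Stand-in model; the paper uses VGG and ResNet architectures
model = nn.Sequential(
    nn.Conv2d(3, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.Flatten(), nn.Linear(64 * 32 * 32, 10),
)

# Kaiming normal initialization [He et al., 2015]
for m in model.modules():
    if isinstance(m, (nn.Conv2d, nn.Linear)):
        nn.init.kaiming_normal_(m.weight, nonlinearity="relu")
        if m.bias is not None:
            nn.init.zeros_(m.bias)

# SGD with 0.9 momentum and an L2 penalty (weight decay) of 5e-4
optimizer = torch.optim.SGD(model.parameters(), lr=0.1,
                            momentum=0.9, weight_decay=5e-4)

# Milestone learning-rate decay; the epochs here are assumptions
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[100, 150], gamma=0.1)
```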