Near Lossless Transfer Learning for Spiking Neural Networks
Authors: Zhanglu Yan, Jun Zhou, Weng-Fai Wong (pp. 10577-10584)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We have implemented CQ training in CUDA-accelerated PyTorch version 1.6.0. The experiments were performed on an Intel Xeon E5-2680 server with 256GB DRAM and a Tesla P100 GPU... We tested our methods on different network structures and datasets, and the results are summarized in Table 2. |
| Researcher Affiliation | Academia | Zhanglu Yan, Jun Zhou, Weng-Fai Wong, Department of Computer Science, National University of Singapore {zhangluyan, zhoujun, wongwf}@comp.nus.edu.sg |
| Pseudocode | Yes | Algorithm 1 (One iteration of CQ training) and Algorithm 2 (Input encoding algorithm). A hedged sketch of a clamp-and-quantize step appears after the table. |
| Open Source Code | Yes | The framework was developed in PyTorch and is publicly available.1 (Footnote 1: https://github.com/zhoujuncc1/shenjingcat) |
| Open Datasets | Yes | Using a 7-layer VGG and a 21-layer VGG-19, running on the CIFAR-10 dataset, we achieved 94.16% and 93.44% accuracy in the respective equivalent SNNs. MNIST consists of 60,000 28×28 grayscale images of handwritten digits from 0 to 9. CIFAR-100 has the same structure as CIFAR-10 but with labels for 100 classes (Krizhevsky, Nair, and Hinton 2009). |
| Dataset Splits | No | MNIST consists of 60,000 28×28 grayscale images of handwritten digits from 0 to 9. The CIFAR images are split into 50,000 training images and 10,000 test images. (No explicit mention of a validation set; one assumed way to carve one out is sketched after the table.) |
| Hardware Specification | Yes | The experiments were performed on an Intel Xeon E5-2680 server with 256GB DRAM and a Tesla P100 GPU, running 64-bit Linux 4.15.0. |
| Software Dependencies | Yes | We have implemented CQ training in CUDA-accelerated PyTorch version 1.6.0. |
| Experiment Setup | Yes | We trained the networks using the Adam optimizer with an adaptive learning rate for 100-150 epochs until they converged. The length of the spike trains T was set between 100 and 1000 for LeNet*, VGG-11/*, VGG-13 and VGG-16/19 respectively. A minimal training-loop sketch consistent with this description follows the table. |
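The paper's Algorithm 1 (one iteration of CQ training) is not reproduced on this page. As a rough illustration of what a clamp-and-quantize activation step could look like in PyTorch, the sketch below clamps activations to [0, 1], quantizes them to T discrete levels (matching a spike train of length T), and uses a straight-through gradient. The class name `ClampQuantize`, the [0, 1] range, and the straight-through backward pass are all assumptions, not details taken from the paper.

```python
import torch

class ClampQuantize(torch.autograd.Function):
    """Hypothetical clamp-and-quantize activation; a sketch, not the
    paper's exact Algorithm 1."""

    @staticmethod
    def forward(ctx, x, T):
        ctx.save_for_backward(x)
        x = x.clamp(0.0, 1.0)          # clamp activations to [0, 1]
        return torch.round(x * T) / T  # quantize to T discrete levels

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        # Straight-through estimator: pass gradients only where the
        # clamp did not saturate; T receives no gradient.
        mask = ((x >= 0.0) & (x <= 1.0)).to(grad_output.dtype)
        return grad_output * mask, None

# Usage: y = ClampQuantize.apply(x, 100)
```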
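Since the paper does not mention a validation set, a reproducer who wants one has to carve it out of the 50,000 training images themselves. The snippet below shows one common, entirely assumed choice (a 45,000/5,000 split via `random_split`); the split sizes and seed are placeholders, not values from the paper.

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

transform = transforms.ToTensor()
# CIFAR-10 ships with a fixed 50,000/10,000 train/test split.
train_full = datasets.CIFAR10("data", train=True, download=True, transform=transform)
test_set = datasets.CIFAR10("data", train=False, download=True, transform=transform)

# 45,000/5,000 is an illustrative choice, not taken from the paper.
generator = torch.Generator().manual_seed(0)
train_set, val_set = random_split(train_full, [45_000, 5_000], generator=generator)
```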
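The paper states only that the networks were trained "using the Adam optimizer with an adaptive learning rate for 100-150 epochs". A minimal training-loop skeleton consistent with that description is sketched below; reading "adaptive learning rate" as `ReduceLROnPlateau`, as well as the base learning rate, factor, and patience, are guesses rather than reported settings.

```python
import torch
from torch import nn

def train(model, loader, epochs=150, device="cuda"):
    # Adam plus ReduceLROnPlateau: one plausible reading of
    # "Adam optimizer with an adaptive learning rate".
    model.to(device)
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)  # lr is a placeholder
    sched = torch.optim.lr_scheduler.ReduceLROnPlateau(opt, factor=0.5, patience=5)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        model.train()
        running = 0.0
        for x, y in loader:
            x, y = x.to(device), y.to(device)
            opt.zero_grad()
            loss = loss_fn(model(x), y)
            loss.backward()
            opt.step()
            running += loss.item()
        sched.step(running / len(loader))  # shrink lr when average loss plateaus
```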