Deep Spiking Neural Network with Neural Oscillation and Spike-Phase Information

Authors: Yi Chen, Hong Qu, Malu Zhang, Yuchen Wang (pp. 7073-7080)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The paper is experimental: "Experimental results show that the proposed learning algorithm resolves the problems caused by the incompatibility between the BP learning algorithm and SNNs, and achieves state-of-the-art performance in single spike-based learning algorithms." From the Experiments and Results section: "In this section, we first apply the RSN model to the XOR problem. Then the proposed learning algorithm is tested on multiple image data sets. We compare the proposed method with several recent results with the same or similar network sizes previously reported, including traditional ANNs, converted SNNs and different SNNs' BP methods."
Researcher Affiliation | Academia | Yi Chen (1), Hong Qu (1), Malu Zhang (1,2), Yuchen Wang (1); (1) School of Computer Science and Engineering, University of Electronic Science and Technology of China, China; (2) Department of Electrical and Computer Engineering, National University of Singapore, Singapore. Emails: {chenyi, wangyuchen}@std.uestc.edu.cn, hongqu@uestc.edu.cn, maluzhang@u.nus.edu
Pseudocode | No | The paper describes its methods through mathematical equations and textual explanations; no explicit pseudocode or algorithm blocks are provided.
Open Source Code | No | The paper provides no statement about, or link to, open-source code for the described methodology.
Open Datasets | Yes | "To demonstrate the capability of the proposed model and learning algorithm, we choose MNIST and CIFAR10, which are two commonly used datasets for benchmarking vision classification algorithms."
Dataset Splits | No | The paper mentions training on MNIST and CIFAR10 and reports epoch counts, but it does not specify the train/validation/test splits (percentages, counts, or a reference to a standard split) needed for reproducibility.
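Because the paper reports no validation split, a reproduction must choose its own. A minimal sketch, assuming the standard 60,000-image MNIST training set and a hypothetical 10% hold-out (the hold-out ratio and seed are assumptions, not from the paper):

```python
import random

# Standard MNIST training set size (the 10,000-image test set is separate).
NUM_TRAIN = 60_000
VAL_FRACTION = 0.1  # hypothetical hold-out ratio; the paper reports no split

rng = random.Random(0)  # fixed seed so the split itself is reproducible
indices = list(range(NUM_TRAIN))
rng.shuffle(indices)

num_val = int(NUM_TRAIN * VAL_FRACTION)
val_indices = indices[:num_val]       # 6,000 held-out validation examples
train_indices = indices[num_val:]     # 54,000 remaining training examples

print(len(train_indices), len(val_indices))  # → 54000 6000
```

Any reproduction report should state whichever split it actually used, since results can shift when the effective training set shrinks.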
Hardware Specification | Yes | "All reported experiments below are conducted on an NVIDIA 1080 GPU with Pytorch framework."
Software Dependencies | No | The paper mentions the "Pytorch framework" but gives no version numbers for PyTorch or any other software dependencies, which are needed for a reproducible setup.
Experiment Setup | Yes | "For the MNIST dataset, we adopted Adaptive moment estimation (Adam) as the optimizer and trained for 150 epochs. For the CIFAR10 dataset, we adopted Stochastic Gradient Descent (SGD) as the optimizer and trained for 200 epochs."
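The reported setup covers only the optimizer choice and epoch count; learning rate, batch size, and momentum are not stated. Collecting what the paper does report into a single configuration makes the gap explicit (`None` marks values a reproduction would have to guess):

```python
# Training settings as reported in the paper; None = not reported.
TRAIN_CONFIG = {
    "MNIST":   {"optimizer": "Adam", "epochs": 150, "lr": None, "batch_size": None},
    "CIFAR10": {"optimizer": "SGD",  "epochs": 200, "lr": None, "batch_size": None},
}

# List the hyperparameters a reproduction must fill in per dataset.
for dataset, cfg in TRAIN_CONFIG.items():
    missing = [k for k, v in cfg.items() if v is None]
    print(dataset, "missing:", ", ".join(missing))
```

A reproduction would plug these into `torch.optim.Adam` / `torch.optim.SGD` respectively, documenting whatever values it chooses for the unreported fields.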