CSNN: An Augmented Spiking based Framework with Perceptron-Inception
Authors: Qi Xu, Yu Qi, Hang Yu, Jiangrong Shen, Huajin Tang, Gang Pan
IJCAI 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the CSNN model on MNIST and its variants, including learning capabilities, encoding mechanisms, robustness to noisy stimuli and its classification performance. The results show that CSNN behaves well compared to other cognitive models with significantly fewer neurons and training samples. |
| Researcher Affiliation | Academia | (1) College of Computer Science and Technology, Zhejiang University; (2) Qiushi Academy for Advanced Studies, Zhejiang University; (3) College of Computer Science, Sichuan University |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | We evaluate the CSNN model on three benchmark datasets: basic MNIST [LeCun et al., 1998], background MNIST [Larochelle et al., 2007], and background-random MNIST [Larochelle et al., 2007], as shown in Figure 3. Each MNIST dataset consists of 28x28 grayscale images of handwritten digits from 0 to 9; each dataset is divided into two parts, 50000 training samples and 10000 test samples. |
| Dataset Splits | No | The paper mentions '50000 training samples and 10000 test samples' but does not specify a separate validation split or how validation was handled. |
| Hardware Specification | Yes | The experiments are run on a Windows server equipped with a two-processor Intel Xeon(R) Core CPU and 64 GB of main memory. The operating system is Windows Server 2012 R2 Standard. |
| Software Dependencies | No | The paper states: 'We use Matlab for training and testing the CSNN.' However, no specific version number for Matlab or any other software dependencies is provided. |
| Experiment Setup | Yes | The CSNN network architecture is (CNN)6C5@28x28-P2(SNN)F200-F100. In this CSNN network, the Perceptron consists of a partial CNN comprising convolutional and pooling layers. The Inception comprises 2 fully-connected layers (F200 and F100); the convolutional layer of the Perceptron has six 5x5 filters and is followed by a max-pooling layer, which feeds the Inception. The Inception adopts the LIF as the neuron model... The Perceptron of CSNN is trained with batch size 10 for the 100-sample training set and batch size 100 for the other training-set sizes. |
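
For readers trying to reproduce the setup quoted in the Experiment Setup row, the sketch below illustrates one plausible reading of the (CNN)6C5@28x28-P2(SNN)F200-F100 layout: a convolutional "Perceptron" front end (six 5x5 filters plus 2x2 max-pooling) feeding a fully-connected LIF "Inception" (F200-F100). This is not the authors' code (they report using Matlab, and no code is released); the use of PyTorch, the "same" padding on the convolution, the LIF constants (tau, threshold, hard reset), the Bernoulli rate encoding, and the rate readout of the final 100 neurons are all assumptions made for illustration.

```python
# Minimal sketch of the quoted CSNN layout; hyperparameters are assumptions,
# not values taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LIFLayer(nn.Module):
    """Fully-connected layer of leaky integrate-and-fire neurons (assumed dynamics)."""

    def __init__(self, in_features, out_features, tau=2.0, v_th=1.0):
        super().__init__()
        self.fc = nn.Linear(in_features, out_features)
        self.tau = tau        # membrane time constant (assumed)
        self.v_th = v_th      # firing threshold (assumed)

    def forward(self, x_seq):
        # x_seq: (T, batch, in_features) spike sequence
        v = torch.zeros(x_seq.size(1), self.fc.out_features, device=x_seq.device)
        spikes = []
        for x_t in x_seq:
            v = v + (self.fc(x_t) - v) / self.tau   # leaky integration of synaptic input
            s = (v >= self.v_th).float()            # spike where threshold is crossed
            v = v * (1.0 - s)                       # hard reset after spiking (assumed)
            spikes.append(s)
        return torch.stack(spikes)                  # (T, batch, out_features)


class CSNNSketch(nn.Module):
    """CNN Perceptron (6C5 + P2) feeding an SNN Inception (F200-F100)."""

    def __init__(self, n_steps=20):
        super().__init__()
        self.n_steps = n_steps
        self.conv = nn.Conv2d(1, 6, kernel_size=5, padding=2)  # 6C5 @ 28x28 ("same" padding assumed)
        self.pool = nn.MaxPool2d(2)                             # P2 -> 6 x 14 x 14
        self.f200 = LIFLayer(6 * 14 * 14, 200)                  # Inception F200
        self.f100 = LIFLayer(200, 100)                          # Inception F100

    def forward(self, images):
        # images: (batch, 1, 28, 28) grayscale digits
        feats = self.pool(F.relu(self.conv(images))).flatten(1)
        # Assumed rate-style encoding: Bernoulli spikes with probability
        # proportional to the normalized feature activation at each time step.
        p = feats / (feats.max() + 1e-8)
        x_seq = torch.stack([torch.bernoulli(p) for _ in range(self.n_steps)])
        out = self.f100(self.f200(x_seq))   # (T, batch, 100) output spike trains
        return out.mean(0)                  # firing rates of the final 100 neurons


if __name__ == "__main__":
    model = CSNNSketch()
    rates = model(torch.rand(4, 1, 28, 28))
    print(rates.shape)  # torch.Size([4, 100])
```

The sketch only reproduces the layer shapes implied by the architecture string; how the 100 output neurons are mapped to the 10 digit classes, and how the Inception is trained, are not specified in the quoted text and are left out here.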