Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
# Bifurcation Spiking Neural Network

**Authors:** Shao-Qun Zhang, Zhao-Yu Zhang, Zhi-Hua Zhou

JMLR 2021 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we conducted experiments on four benchmark data sets to evaluate the functional performance of BSNN. The experiments are performed to discuss the following questions: Q1: Is the performance of BSNN comparable with state-of-the-art SNNs? Q2: Does the performance of BSNN surpass that of alternating optimization, especially in terms of accuracy and efficiency? Q3: Concerning BSNN, is the performance robust to the control rate? In which conditions? |
| Researcher Affiliation | Academia | Shao-Qun Zhang EMAIL Zhao-Yu Zhang EMAIL Zhi-Hua Zhou EMAIL National Key Laboratory for Novel Software Technology Nanjing University Nanjing 210023, China |
| Pseudocode | No | The paper describes procedures in prose, for example, under "4.2 Approaches for Parameterizing Control Rates" it lists steps like "Initialization", "Update connection weights", "Update γ". However, it does not present these as a formally labeled "Pseudocode" or "Algorithm" block with structured code-like formatting. |
| Open Source Code | No | The paper does not provide an explicit statement about releasing source code for the described methodology, nor does it include a link to a code repository. It mentions using third-party tools such as SLAYER, but does not state that its own implementation is publicly available. |
| Open Datasets | Yes | Data Sets: (1) The MNIST handwritten digit data set1 comprises a training set of 60,000 examples and a testing set of 10,000 examples in 10 classes, where each example is centered in a 28×28 image. Using Poisson encoding, we produce a list of spike signals with a formation of 784×T binary matrices, where T denotes the encoding length and each row represents a spike sequence at each pixel. (2) The Neuromorphic-MNIST (N-MNIST) data set2 (Orchard et al., 2015) is a spiking version of the original frame-based MNIST data set. (3) The Fashion-MNIST data set3 consists of a training set of 60,000 examples and a testing set of 10,000 examples. Each example is a 28×28 grayscale image, associated with a label from 10 classes. (4) The Extended MNIST-Balanced (EMNIST) (Cohen et al., 2017) data set is an extension of MNIST to handwritten letters, which contains handwritten upper & lower case letters of the English alphabet in addition to the digits, and comprises 112,800 training and 18,800 testing samples for 47 classes. 1. http://yann.lecun.com/exdb/mnist/ 2. https://www.garrickorchard.com/datasets/n-mnist 3. https://www.kaggle.com/zalando-research/fashionmnist |
| Dataset Splits | Yes | The MNIST handwritten digit data set comprises a training set of 60,000 examples and a testing set of 10,000 examples in 10 classes... The Neuromorphic-MNIST (N-MNIST) data set... consists of the same 60,000 training and 10,000 testing samples as the original MNIST data set... The Fashion-MNIST data set consists of a training set of 60,000 examples and a testing set of 10,000 examples... The Extended MNIST-Balanced (EMNIST)... comprises 112,800 training and 18,800 testing samples for 47 classes. |
| Hardware Specification | No | The paper does not provide specific details regarding the hardware used for running its experiments (e.g., GPU models, CPU types, or memory specifications). |
| Software Dependencies | No | The paper mentions employing "SLAYER (Shrestha and Orchard, 2018)" as a basic model for updating connection weights, but it does not specify any software names with version numbers for reproducibility (e.g., Python, PyTorch/TensorFlow versions, or SLAYER version). |
| Experiment Setup | Yes | Table 1: Parameter Setting of BSNN on Various Data Sets (values for MNIST / N-MNIST / Fashion-MNIST / EMNIST). Batch Size: 32 / 32 / 32 / 64; Encoding Length T: 300 / 300 / 400 / 400; Expect Spike Count (True): 100 / 80 / 100 / 140; Expect Spike Count (False): 10 / 5 / 10 / 0; Learning Rate η: 0.01 / 0.01 / 0.001 / 0.01; Refractory Period: 2 ms / 1 ms / 2 ms / 2 ms; Time Constant of Synapse τs: 8 ms / 8 ms / 8 ms / 8 ms |
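The Poisson encoding quoted in the Open Datasets row (a 28×28 image mapped to a 784×T binary spike matrix) can be sketched as follows. This is a minimal illustration, not the paper's actual preprocessing code; the function name and the choice of pixel intensity as the per-step firing probability are assumptions.

```python
import numpy as np

def poisson_encode(image, T, seed=None):
    """Encode a grayscale image (values in [0, 1]) as a (pixels x T)
    binary spike matrix: each pixel's intensity sets the per-step
    firing probability of an independent Bernoulli (rate) process."""
    rng = np.random.default_rng(seed)
    rates = image.reshape(-1, 1)  # one firing probability per pixel
    return (rng.random((rates.shape[0], T)) < rates).astype(np.uint8)

# A 28x28 image with encoding length T = 300 (the MNIST setting in
# Table 1) yields a 784 x 300 binary matrix; row i is pixel i's spike train.
img = np.random.default_rng(0).random((28, 28))
spikes = poisson_encode(img, T=300, seed=1)
```

Brighter pixels produce denser spike trains, which is the usual rate-coding convention for converting frame-based data sets like MNIST into spiking input.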
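The Pseudocode row notes that the paper describes its training procedure only in prose, as alternating steps ("Initialization", "Update connection weights", "Update γ"). The pattern those steps follow can be sketched with a generic alternating-minimization loop on a toy quadratic objective; the objective, shapes, and learning rate below are illustrative stand-ins, not the paper's actual loss or update rules.

```python
import numpy as np

# Toy objective f(w, g) = ||A w - g * b||^2 standing in for the training
# loss; w plays the role of the connection weights and g the control rate
# gamma. Each iteration updates one variable with the other held fixed.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
b = rng.standard_normal(20)

w = np.zeros(5)   # "Initialization" of the weights
g = 0.5           # "Initialization" of gamma
lr = 0.01

loss_start = float(np.sum((A @ w - g * b) ** 2))
for step in range(500):
    residual = A @ w - g * b
    w -= lr * (2 * A.T @ residual)    # "Update connection weights" (gamma fixed)
    residual = A @ w - g * b
    g -= lr * (-2 * b @ residual)     # "Update gamma" (weights fixed)
loss_end = float(np.sum((A @ w - g * b) ** 2))
```

In the paper these steps would use the actual BSNN loss, with the weight update performed via SLAYER as mentioned in the Software Dependencies row; the loop structure is what the prose description conveys.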