Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

On the Intrinsic Structures of Spiking Neural Networks

Authors: Shao-Qun Zhang, Jia-Yi Chen, Jin-Hui Wu, Gao Zhang, Huan Xiong, Bin Gu, Zhi-Hua Zhou

JMLR 2024

Reproducibility Variable — Result — Supporting LLM Response

Research Type — Experimental. "Empirical experiments conducted on various real-world datasets affirm the effectiveness of our proposed methods."

Researcher Affiliation — Academia. (1) National Key Laboratory for Novel Software Technology, Nanjing University, China; (2) School of Intelligent Science and Technology, Nanjing University, China; (3) School of Artificial Intelligence, Nanjing University, China; (4) School of Mathematical Sciences, Jiangsu Second Normal University, China; (5) Department of Machine Learning, Mohamed bin Zayed University of Artificial Intelligence, UAE.

Pseudocode — Yes. "Algorithm 1: Algorithmic Calculation for Upper Bounds of H(n)."

Open Source Code — No. The paper does not state that the authors' implementation is open source, nor does it link to a repository; mentions of licenses refer to the paper itself, not to associated code.

Open Datasets — Yes. "The Neuromorphic-MNIST (N-MNIST) data set (Orchard et al., 2015) is a spiking version of the original frame-based MNIST data set. ... The CIFAR10-DVS data set (Li et al., 2017) is an event-stream conversion of CIFAR-10 ... The DVS128-Gesture data set (Amir et al., 2017) comprises 1,342 instances ... The MNIST handwritten digit data set comprises a training set of 60,000 examples ... The Fashion-MNIST data set consists of a training set of 60,000 examples ... The Extended MNIST-Balanced (EMNIST) data set (Cohen et al., 2017) is an extension of MNIST ... The CIFAR-10 data set (Krizhevsky and Hinton, 2009) consists of 60,000 32x32 color images ... The CIFAR-100 data set is just like CIFAR-10, except it has 100 classes ..."

Dataset Splits — Yes. "N-MNIST: consists of the same 60,000 training and 10,000 testing samples as the original MNIST data set ... MNIST: comprises a training set of 60,000 examples and a testing set of 10,000 examples ... Fashion-MNIST: consists of a training set of 60,000 examples and a testing set of 10,000 examples ... EMNIST: contains ... 112,800 training and 18,800 testing samples for 47 classes ... CIFAR-10: consists of 60,000 32x32 color images in 10 classes, with 50,000 training images and 10,000 test images ... CIFAR-100: each class contains 600 images (500 training and 100 testing)."

Hardware Specification — No. The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used to run the experiments.

Software Dependencies — No. The paper mentions "efficient software packages for their training and deployment" but does not name any specific software with version numbers that would be required to replicate the experiments.

Experiment Setup — Yes. "Table 3: Hyper-parameter setting of the proposed SNNs on image recognition." The table lists Batch Size, Encoding Length T, Expect Spike Count (True/False), Firing Threshold, Learning Rate η, Excitation Probability Threshold pθ, Maximum Time, Membrane Time τm, Time Constant of Synapse τs, and Time Step.
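As a quick consistency check on the splits quoted above (an illustrative sketch only, not code from the paper; the dictionary below simply restates the figures given in the Dataset Splits row): CIFAR-100's 600 images per class, split 500/100 across 100 classes, yields the same 50,000/10,000 totals as CIFAR-10.

```python
# Train/test splits as quoted in the report above (illustrative summary only).
SPLITS = {
    "MNIST": (60_000, 10_000),
    "Fashion-MNIST": (60_000, 10_000),
    "N-MNIST": (60_000, 10_000),
    "EMNIST-Balanced": (112_800, 18_800),
    "CIFAR-10": (50_000, 10_000),
}

# CIFAR-100: 100 classes x (500 training + 100 testing) images per class.
SPLITS["CIFAR-100"] = (100 * 500, 100 * 100)

# Same overall totals as CIFAR-10, as the quoted descriptions imply.
assert SPLITS["CIFAR-100"] == SPLITS["CIFAR-10"]

for name, (train, test) in SPLITS.items():
    print(f"{name}: {train + test} total ({train} train / {test} test)")
```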