Neuronal Competition Groups with Supervised STDP for Spike-Based Classification

Authors: Gaspard Goupy, Pierre Tirilly, Ioan Marius Bilasco

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | On top of two different unsupervised feature extractors, we obtain significant accuracy improvements on image recognition datasets such as CIFAR-10 and CIFAR-100. We show that our competition regulation mechanism is crucial for ensuring balanced competition and improved class separation.
Researcher Affiliation | Academia | Gaspard Goupy1, Pierre Tirilly1, and Ioan Marius Bilasco1,*; 1Univ. Lille, CNRS, Centrale Lille, UMR 9189 CRIStAL, F-59000 Lille, France
Pseudocode | Yes | In Supplementary Material (Section 1), we provide the overall algorithm for training a spiking classification layer with our proposed methods.
Open Source Code | Yes | The source code is publicly available at: https://gitlab.univ-lille.fr/fox/snn-ncg.
Open Datasets | Yes | We select four image recognition datasets of growing complexity: MNIST [49], Fashion-MNIST [50], CIFAR-10 [51], and CIFAR-100 [51]. MNIST and Fashion-MNIST comprise 28×28 grayscale images, 60,000 samples for training and 10,000 for testing, categorized into 10 classes. CIFAR-10 and CIFAR-100 comprise 32×32 RGB images, 50,000 for training and 10,000 for testing. They consist of, respectively, 10 and 100 classes. (A dataset-loading sketch is given after the table.)
Dataset Splits | Yes | For hyperparameter optimization, we construct a validation set from the training set by randomly selecting, for each class, a percentage ν of its samples. Then, we use the grid search algorithm to optimize the hyperparameters of the spiking classification layer (for each rule, dataset, and feature extractor). For evaluation, we employ the K-fold cross-validation strategy. We divide the training set into K subsets and train K models, each using a different subset for validation while the remaining K-1 subsets are used for training. Each model is trained with a different seed. Then, we evaluate the trained models on the test set and we compute the mean test accuracy and standard deviation (1-sigma). We use ρ = 10, K = 10 and ν = 1/K (i.e. we allocate 10% of the training sets for validation). (A hedged sketch of this evaluation protocol is given after the table.)
Hardware Specification | Yes | Experiments presented in this paper were carried out using the Grid'5000 testbed, supported by a scientific interest group hosted by Inria and including CNRS, RENATER and several Universities as well as other organizations (see https://www.grid5000.fr). (Supplementary Material Section 2.4 further states: 'All experiments presented in this paper were carried out using the Grid'5000 testbed [58]. They were mainly run on CPU (Intel Xeon E5-2630 v3 @ 2.40GHz) but some of the longer experiments (i.e. CIFAR-100, ablation studies, hyperparameter optimization) were run on GPU (NVIDIA A100).')
Software Dependencies | No | The paper mentions various software components and libraries, but it does not provide specific version numbers for these dependencies, which would be needed for a fully reproducible description.
Experiment Setup | Yes | 5.1 Experimental Setup: Our classification system consists of a feature extractor trained with unsupervised Hebbian-based learning, followed by a spiking classification layer trained with supervised STDP. Unless otherwise specified, we set M = 5 neurons per class for NCG-based methods... For hyperparameter optimization, we construct a validation set from the training set by randomly selecting, for each class, a percentage ν of its samples. Then, we use the grid search algorithm to optimize the hyperparameters of the spiking classification layer... We use ρ = 10, K = 10 and ν = 1/K (i.e. we allocate 10% of the training sets for validation). (A hedged sketch of the class-grouped output layer follows after the table.)
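
The Open Datasets row lists only standard benchmarks. The minimal loading sketch below assumes torchvision; the library choice, download directory, and transform are illustrative assumptions, since the excerpt does not state how the datasets are fetched.

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
root = "./data"  # hypothetical download directory

mnist_train    = datasets.MNIST(root, train=True,  download=True, transform=to_tensor)
mnist_test     = datasets.MNIST(root, train=False, download=True, transform=to_tensor)
fashion_train  = datasets.FashionMNIST(root, train=True, download=True, transform=to_tensor)
cifar10_train  = datasets.CIFAR10(root, train=True, download=True, transform=to_tensor)
cifar100_train = datasets.CIFAR100(root, train=True, download=True, transform=to_tensor)

# Sizes quoted in the Open Datasets row: 60,000/10,000 train/test images for
# MNIST and Fashion-MNIST, and 50,000/10,000 for CIFAR-10 and CIFAR-100.
print(len(mnist_train), len(mnist_test), len(cifar10_train))
```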
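The evaluation protocol quoted in the Dataset Splits row (per-class sampling, K-fold cross-validation with K = 10, a different seed per model, mean and 1-sigma test accuracy) can be sketched as follows. This is a minimal NumPy sketch, not the authors' implementation; `train_fn` and `eval_fn` are hypothetical placeholders for the training and test-set evaluation code in their repository.

```python
import numpy as np

def stratified_kfold_indices(labels, K, seed=0):
    """Split each class into K folds so every fold preserves the class balance."""
    rng = np.random.default_rng(seed)
    folds = [[] for _ in range(K)]
    for c in np.unique(labels):
        class_idx = rng.permutation(np.where(labels == c)[0])
        for f, chunk in enumerate(np.array_split(class_idx, K)):
            folds[f].extend(chunk.tolist())
    return [np.array(f) for f in folds]

def kfold_evaluate(labels, train_fn, eval_fn, K=10):
    """Train K models: fold k serves as validation, the other K-1 folds train,
    and each model receives a different seed. Returns mean and 1-sigma accuracy."""
    folds = stratified_kfold_indices(labels, K)
    test_acc = []
    for k in range(K):
        val_idx = folds[k]
        train_idx = np.concatenate([folds[j] for j in range(K) if j != k])
        model = train_fn(train_idx, val_idx, seed=k)  # hypothetical training call
        test_acc.append(eval_fn(model))               # hypothetical test-set accuracy
    return float(np.mean(test_acc)), float(np.std(test_acc))
```

With K = 10, each fold holds ν = 1/K = 10% of the training samples of every class, which matches the quoted validation allocation.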
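The Experiment Setup row states that the classification layer allocates M = 5 neurons per class (the Neuronal Competition Groups). The sketch below only illustrates that class-grouped output layout and an assumed max-activation decoding rule; the feature dimension, weight initialization, and decision rule are assumptions, and the actual STDP training, spiking dynamics, and competition regulation are defined in the paper and its released code.

```python
import numpy as np

# Illustrative sizes only: C classes, M = 5 neurons per class (the NCG setting
# quoted above), D features from the unsupervised extractor (D is assumed).
C, M, D = 10, 5, 512
rng = np.random.default_rng(0)
weights = rng.uniform(0.0, 1.0, size=(C * M, D))  # one weight row per output neuron

def neuron_class(neuron_idx: int, neurons_per_class: int = M) -> int:
    """Neurons are grouped contiguously: indices [c*M, (c+1)*M) belong to class c."""
    return neuron_idx // neurons_per_class

def predict(features: np.ndarray) -> int:
    """Assumed decoding: the class of the most activated output neuron wins.
    The paper's actual spike-based decision rule may differ."""
    activations = weights @ features  # response of every output neuron, shape (C*M,)
    return neuron_class(int(np.argmax(activations)))

# Example: a random feature vector yields a class index in [0, C).
print(predict(rng.random(D)))
```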