Understanding spiking networks through convex optimization

Authors: Allan Mancoo, Sander Keemink, Christian K. Machens

NeurIPS 2020

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | To demonstrate the effect of the learning rules in a simple toy example, we trained an SNN with N = 50 neurons to reproduce a paraboloid in 3D-space, y = 0.3(x₁² + x₂²), as shown in the inset of Fig. 3B(ii). During a learning epoch, the inputs were randomly sampled from one hundred equally spaced points in [−4, 4] for each x-dimension. In each trial, a sampled input-target pair (x, y) was fed to the SNN for four seconds of simulation time (using the forward Euler method). To reduce the effect of spikes due to transients as the input changed across trials, we only started training 1 s after the onset of the trial. We ran the algorithm for 100 epochs, with each epoch covering the whole input space. Finally, we turned off the teaching signal and ran the network with the learnt parameters. Fig. 3B(ii) shows the contours of the network readout (averaged over the last few time bins), which is a piecewise linear fit to the paraboloid. By color-coding the background of the contour plot based on which neurons spike, we see a more distributed but still distinct partitioning of the input space after learning (contrast Fig. 3B(i) and B(ii)).
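The quoted setup can be sketched in a few lines. The sampling grid and paraboloid target follow the description above; the leaky-integrator update and all function names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def paraboloid(x1, x2):
    """Target surface y = 0.3*(x1^2 + x2^2) that the SNN readout fits."""
    return 0.3 * (x1**2 + x2**2)

def sample_input(rng, n_points=100):
    """Draw one input from 100 equally spaced points per dimension in [-4, 4]."""
    grid = np.linspace(-4.0, 4.0, n_points)
    return rng.choice(grid), rng.choice(grid)

def forward_euler_decay(v0, lam=2.0, dt=1e-3, t_end=4.0):
    """Forward-Euler integration of a leaky variable dv/dt = -lam*v over the
    4 s of simulation time per trial (a stand-in for the full SNN dynamics)."""
    v = v0
    for _ in range(int(t_end / dt)):
        v = v + dt * (-lam * v)
    return v

rng = np.random.default_rng(0)
x1, x2 = sample_input(rng)
y = paraboloid(x1, x2)  # one sampled input-target pair (x, y)
```

With a 1 ms step, the Euler trajectory stays close to the exact exponential decay, which is the sense in which the forward Euler method is adequate for this simulation.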
Researcher Affiliation | Academia | Allan Mancoo, Champalimaud Centre for the Unknown, Lisbon, Portugal, and École Normale Supérieure, Paris, France; allan.mancoo@neuro.fchampalimaud.org
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. Procedures are described in text or via mathematical equations.
Open Source Code | Yes | Source code is available at https://github.com/machenslab/spikes.
Open Datasets | No | The paper describes generating input data by random sampling or for classification tasks, but it does not provide concrete access information (link, DOI, repository, or citation) for a publicly available or open dataset.
Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, citations to predefined splits, or detailed splitting methodology) needed to reproduce the data partitioning into train/validation/test sets.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types with speeds, memory amounts, or detailed computer specifications) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment, beyond mentioning the "forward Euler method" for simulation.
Experiment Setup | Yes | Network parameters: N = 300, λ = 2, D = 0.1G, µ = 0.1, σ_V = 0.1, synaptic delay = 2 ms. Learning parameters: α = 0.1, λ_T = 0.001, both decaying across epochs according to exp(−0.001·n_epoch), for 750 epochs. Perturbation noise parameters: σ_stim = 0.1, σ_OU = 0.05, λ_OU = 10.
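As a worked example of the reported decay schedule, both learning parameters start at their listed values and are scaled by exp(−0.001·n_epoch) over the 750 epochs. The helper below is a hypothetical sketch of that rule (names are illustrative):

```python
import math

# Initial values and epoch count as listed in the paper's setup.
ALPHA_0 = 0.1       # learning rate alpha
LAMBDA_T_0 = 0.001  # regularization weight lambda_T
N_EPOCHS = 750

def decayed(initial, n_epoch, rate=0.001):
    """Value of a learning parameter at epoch n_epoch under the
    exp(-rate * n_epoch) schedule."""
    return initial * math.exp(-rate * n_epoch)

# e.g. alpha at the final epoch: 0.1 * exp(-0.75)
alpha_final = decayed(ALPHA_0, N_EPOCHS)
```

By the final epoch each parameter has shrunk to exp(−0.75) ≈ 47% of its initial value, so the schedule anneals gently rather than driving the rates to zero.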