Gradient Descent for Spiking Neural Networks

Authors: Dongsung Huh, Terrence J. Sejnowski

NeurIPS 2018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For demonstration, we trained recurrent spiking networks on two dynamic tasks: one that requires optimizing fast (~millisecond) spike-based interactions for efficient encoding of information, and a delayed-memory task over an extended duration (~second). The results show that the gradient descent approach indeed optimizes network dynamics on the time scale of individual spikes as well as on behavioral time scales. (A training-loop sketch follows the table.)
Researcher Affiliation | Academia | Dongsung Huh, Salk Institute, La Jolla, CA 92037, huh@salk.edu; Terrence J. Sejnowski, Salk Institute, La Jolla, CA 92037, terry@salk.edu
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing code for the work described, nor does it provide a direct link to a source-code repository.
Open Datasets | No | The paper describes using 'randomly generated sum-of-sinusoid signals' and 'binary pulse signals' as input, but it provides no concrete access information (link, DOI, repository, or formal citation with authors/year) showing that these inputs are available as a public dataset. (An input-generation sketch follows the table.)
Dataset Splits | No | The paper mentions drawing 'mini-batches of 50 training examples' but does not provide dataset split information (exact percentages, sample counts, citations to predefined splits, or a detailed splitting methodology) for training, validation, or test sets.
Hardware Specification | No | The paper does not provide specific hardware details (exact GPU/CPU models, processor types and speeds, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not list the ancillary software (e.g., library or solver names with version numbers, such as Python 3.8 or CPLEX 12.4) needed to replicate the experiments.
Experiment Setup | Yes | 'We used a network of 30 NIF neurons, 2 input and 2 output channels.' 'We also introduce two different synaptic time constants, as proposed in [35, 36]: a fast time constant τ = 1 ms for the recurrent connections, and a slow time constant τ_s = 10 ms for the readout.' A cost function l penalizes the readout error and the overall synaptic activity: l = ‖o − o_d‖² + λ‖s‖², where o_d(t) is the desired output and λ is a regularization parameter. 'We trained a network of 80 quadratic integrate and fire (QIF) neurons...' 'Time constants of τ_v = 25, τ_f = 5, and τ = 20 ms were used.' (A cost-function sketch follows the table.)
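
The Research Type row quotes the paper's two demonstration tasks, both trained by gradient descent. The paper derives exact gradients for differentiable spiking dynamics; the sketch below does not reproduce that derivation. It is a minimal, hedged illustration of the overall training loop using a common surrogate-gradient approximation in PyTorch, and every size, threshold, time constant, and learning rate in it is an assumption rather than a value taken from the paper.

```python
# Minimal sketch of gradient-descent training for a recurrent spiking network.
# NOT the paper's exact-gradient formulation: it substitutes a standard
# surrogate-gradient approximation, and every constant below (sizes, threshold,
# time constants, learning rate, number of steps) is an illustrative assumption.
import torch

class SpikeFn(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate derivative backward."""
    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v > 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        return grad_out / (1.0 + 10.0 * v.abs()) ** 2  # fast-sigmoid surrogate

def simulate(x, W_in, W_rec, W_out, dt=1e-3, tau_v=20e-3, tau_s=10e-3):
    """Run a current-based spiking RNN; return readout o and synaptic trace s over time."""
    batch, steps, _ = x.shape
    n = W_rec.shape[0]
    v = torch.zeros(batch, n)          # membrane potentials
    s = torch.zeros(batch, n)          # filtered spike trains (synaptic activity)
    o_trace, s_trace = [], []
    for t in range(steps):
        drive = x[:, t] @ W_in.T + s @ W_rec.T
        v = v + (dt / tau_v) * (drive - v)     # leaky integration
        z = SpikeFn.apply(v - 1.0)             # spike when v crosses threshold 1
        v = v * (1.0 - z)                      # reset neurons that spiked
        s = s + (dt / tau_s) * (-s) + z        # low-pass filter the spikes
        s_trace.append(s)
        o_trace.append(s @ W_out.T)
    return torch.stack(o_trace, dim=1), torch.stack(s_trace, dim=1)

torch.manual_seed(0)
n_in, n_rec, n_out, steps = 2, 30, 2, 200
W_in = (0.5 * torch.randn(n_rec, n_in)).requires_grad_()
W_rec = (0.1 * torch.randn(n_rec, n_rec)).requires_grad_()
W_out = (0.1 * torch.randn(n_out, n_rec)).requires_grad_()
opt = torch.optim.Adam([W_in, W_rec, W_out], lr=1e-2)

x = torch.randn(50, steps, n_in)     # placeholder mini-batch of 50 input signals
target = x                           # toy objective: reproduce the input
for _ in range(200):
    opt.zero_grad()
    o, _ = simulate(x, W_in, W_rec, W_out)
    loss = ((o - target) ** 2).mean()
    loss.backward()
    opt.step()
```

The toy objective (reproducing the input) stands in for the paper's encoding task only to keep the sketch self-contained.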
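
The Open Datasets row notes that the inputs are generated on the fly rather than drawn from a released dataset. The sketch below shows one plausible generator for random sum-of-sinusoid signals and binary pulse signals; the frequency range, amplitudes, pulse lengths, and counts are assumptions, not the paper's exact settings.

```python
# Hedged sketch of on-the-fly input generation: random sum-of-sinusoid signals
# and binary pulse signals. All ranges and counts are illustrative assumptions.
import numpy as np

def sum_of_sinusoids(n_steps, n_channels=2, n_sines=4, dt=1e-3, rng=None):
    """Each channel is a random superposition of a few sinusoids."""
    rng = np.random.default_rng() if rng is None else rng
    t = np.arange(n_steps) * dt
    x = np.zeros((n_steps, n_channels))
    for c in range(n_channels):
        freqs = rng.uniform(1.0, 10.0, n_sines)       # Hz (assumed range)
        phases = rng.uniform(0.0, 2 * np.pi, n_sines)
        amps = rng.uniform(0.5, 1.0, n_sines)
        x[:, c] = sum(a * np.sin(2 * np.pi * f * t + p)
                      for a, f, p in zip(amps, freqs, phases))
    return x

def binary_pulses(n_steps, n_channels=2, pulse_len=50, n_pulses=2, rng=None):
    """Each channel carries a few randomly placed rectangular pulses."""
    rng = np.random.default_rng() if rng is None else rng
    x = np.zeros((n_steps, n_channels))
    for c in range(n_channels):
        for _ in range(n_pulses):
            start = rng.integers(0, n_steps - pulse_len)
            x[start:start + pulse_len, c] = 1.0
    return x

batch = [sum_of_sinusoids(1000) for _ in range(50)]   # e.g. a mini-batch of 50
```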
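
The Experiment Setup row quotes a cost that penalizes readout error plus overall synaptic activity, l = ‖o − o_d‖² + λ‖s‖², accumulated over the trial. The sketch below implements that objective for readouts and synaptic traces shaped (batch, steps, channels), matching the outputs of the training sketch above; the value of λ and the reading of the activity term as the squared norm of the filtered spike trains are assumptions.

```python
# Hedged sketch of the quoted cost: readout error plus a penalty on synaptic
# activity, l = ||o - o_d||^2 + lambda * ||s||^2, integrated over time.
# The regularization weight (reconstructed here as lambda) and the exact form
# of the activity term are assumptions.
import torch

def cost(o, o_d, s, lam=1e-3, dt=1e-3):
    """o, o_d: (batch, steps, n_out) readout and target; s: (batch, steps, n_rec)."""
    readout_err = ((o - o_d) ** 2).sum(dim=-1)   # ||o - o_d||^2 at each time step
    activity = (s ** 2).sum(dim=-1)              # ||s||^2 at each time step
    l = readout_err + lam * activity             # instantaneous cost
    return (l.sum(dim=1) * dt).mean()            # integrate over time, average batch
```

Substituting this cost for the plain mean-squared error in the training sketch above recovers the regularized objective described in the quote.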