Addressing the speed-accuracy simulation trade-off for adaptive spiking neurons

Authors: Luke Taylor, Andrew King, Nicol S Harper

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We computationally validate our implementation to obtain over a 50× training speedup using small DTs on synthetic benchmarks. We also obtained a comparable performance to the standard ALIF implementation on different supervised classification tasks, yet in a fraction of the training time. Lastly, we showcase how our model makes it possible to quickly and accurately fit real electrophysiological recordings of cortical neurons, where very fine sub-millisecond DTs are crucial for capturing exact spike timing. (A hedged sketch of the standard ALIF dynamics referred to here is given below the table.)
Researcher Affiliation | Academia | Luke Taylor, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom (luke.taylor@hertford.ox.ac.uk); Andrew J King, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom (andrew.king@dpag.ox.ac.uk); Nicol S Harper, Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom (nicol.harper@dpag.ox.ac.uk)
Pseudocode | No | The paper describes algorithmic steps and mathematical formulations (e.g., in Sections 3 and 3.1), but it does not include a clearly labeled "Pseudocode" or "Algorithm" block.
Open Source Code | Yes | Implementation details can be found in the Supplementary material and the code at https://github.com/webstorms/Blocks.
Open Datasets | Yes | We trained our accelerated ALIF SNN and the standard ALIF SNN on the Neuromorphic-MNIST (N-MNIST) (using DT= 1ms) [65] and Spiking Heidelberg Digits (SHD) (using DT= 2ms) [66] spiking classification datasets... We explored the ability of our model to fit in vitro electrophysiological recordings from 146 inhibitory and excitatory neurons in mouse primary visual cortex (V1) (provided by the Allen Institute [69, 70]).
Dataset Splits | No | For the electrophysiological recordings, the paper states, 'For each neuron, we used half of the recordings for fitting and the other half for testing'. For N-MNIST and SHD, it mentions training and evaluating performance but does not specify exact training/validation/test splits, nor does it explicitly mention a separate validation set.
Hardware Specification | No | The paper mentions that its solution 'permits a more efficient parallelisation on GPUs' and that 'all the non-sequential operations can be run in parallel on a GPU'. However, it does not provide specific hardware details such as GPU or CPU models used for the experiments.
Software Dependencies | No | The paper mentions using a 'multi-Gaussian surrogate-gradient function' and refers to other SNN implementations such as Norse [63] and Spikingjelly [64]. While it states that 'Implementation details can be found in the Supplementary material and the code at https://github.com/webstorms/Blocks', the main paper text does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | In all experiments we employed identical model architectures consisting of two hidden layers (of 256 neurons each) and an additional integrator readout layer, with predictions taken from the readout neurons with maximal summated membrane potential over time (as commonly done [66, 68, 29, 42, 50]; see Supplementary material). We trained our model with different non-biological ARPs on each dataset... Using a DT= 0.1ms and an ARP= 2ms... (An illustrative, non-authoritative sketch of such a configuration follows the table, after the ALIF dynamics.)
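
For reference, the adaptive leaky integrate-and-fire (ALIF) neuron that the paper accelerates is commonly simulated in a discrete-time form like the one below. This is a sketch of the standard formulation only: the notation, the soft reset, and the threshold parameterisation are our assumptions rather than the paper's exact equations, and the absolute refractory period (ARP) used in the experiments is omitted.

\begin{align*}
\alpha &= e^{-\mathrm{DT}/\tau_m}, \qquad \rho = e^{-\mathrm{DT}/\tau_a} && \text{per-step decay factors}\\
v_t &= \alpha\, v_{t-1} + I_t - z_{t-1}\,\vartheta_{t-1} && \text{membrane potential (soft reset on spike)}\\
a_t &= \rho\, a_{t-1} + z_{t-1} && \text{threshold adaptation variable}\\
\vartheta_t &= v_{\mathrm{th}} + \beta\, a_t && \text{adaptive firing threshold}\\
z_t &= H(v_t - \vartheta_t) && \text{spike indicator (Heaviside } H\text{)}
\end{align*}

Because the decay factors depend on DT, very small simulation steps make exact spike timing recoverable but multiply the number of sequential updates, which is the speed-accuracy trade-off the paper addresses.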
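The experiment setup row describes the architecture only in prose; the Python sketch below shows one way such a network could be assembled. All class names, time constants, and the plain sequential simulation loop are our assumptions for illustration; this is not the authors' accelerated implementation from the Blocks repository.

import torch

def heaviside(x):
    # Spike nonlinearity; training would replace its gradient with a
    # multi-Gaussian surrogate, as mentioned in the paper.
    return (x > 0).float()

class ALIFLayer(torch.nn.Module):
    # Hypothetical adaptive LIF layer with a naive step-by-step loop.
    def __init__(self, n_in, n_out, dt=1e-3, tau_mem=20e-3, tau_adapt=150e-3, beta=1.8):
        super().__init__()
        self.fc = torch.nn.Linear(n_in, n_out)
        self.alpha = float(torch.exp(torch.tensor(-dt / tau_mem)))   # membrane decay per step
        self.rho = float(torch.exp(torch.tensor(-dt / tau_adapt)))   # adaptation decay per step
        self.beta = beta

    def forward(self, x):  # x: (batch, time, n_in) spike tensor
        batch, steps, _ = x.shape
        v = torch.zeros(batch, self.fc.out_features)
        a = torch.zeros(batch, self.fc.out_features)
        out = []
        for t in range(steps):
            v = self.alpha * v + self.fc(x[:, t])
            threshold = 1.0 + self.beta * a      # adaptive threshold
            z = heaviside(v - threshold)
            v = v - z * threshold                # soft reset on spike
            a = self.rho * a + z
            out.append(z)
        return torch.stack(out, dim=1)

class IntegratorReadout(torch.nn.Module):
    # Non-spiking, leak-free integrator readout.
    def __init__(self, n_in, n_classes):
        super().__init__()
        self.fc = torch.nn.Linear(n_in, n_classes)

    def forward(self, spikes):  # spikes: (batch, time, n_in)
        u = self.fc(spikes).cumsum(dim=1)   # integrator membrane potential per step
        return u.sum(dim=1)                 # potential summated over time -> class scores

# Example configuration for SHD-like input (700 channels, 20 classes, DT = 2 ms).
model = torch.nn.Sequential(
    ALIFLayer(700, 256, dt=2e-3),
    ALIFLayer(256, 256, dt=2e-3),
    IntegratorReadout(256, 20),
)
x = torch.rand(4, 100, 700).bernoulli()   # dummy spike trains: batch of 4, 100 steps
prediction = model(x).argmax(dim=-1)      # class with maximal summated readout potential

The loop over time steps in ALIFLayer is exactly the sequential bottleneck the paper targets; its contribution is to restructure this computation so most of it can run in parallel on a GPU even at sub-millisecond DTs.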