Synaptic Sampling: A Bayesian Approach to Neural Network Plasticity and Rewiring

Authors: David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In simulations we show that our model for synaptic plasticity allows spiking neural networks to compensate continuously for unforeseen disturbances. [...] In computer simulations, we demonstrate another advantage of the synaptic sampling framework: it endows neural circuits with an inherent robustness against perturbations. [...] Here, we consider a network that allows us to study the self-organization of connections between hidden neurons."
Researcher Affiliation | Academia | David Kappel, Stefan Habenschuss, Robert Legenstein, Wolfgang Maass; Institute for Theoretical Computer Science, Graz University of Technology, A-8010 Graz, Austria; [kappel, habenschuss, legi, maass]@igi.tugraz.at
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks; it primarily presents mathematical equations and descriptions of the model.
Open Source Code | No | The paper does not link to an open-source code repository or state that the code for the methodology is being released.
Open Datasets | Yes | "The network was trained by repeatedly drawing random instances of spoken and written digits of the same type (digit 1 or 2 taken from MNIST and 7 utterances of speaker 1 from TI 46) and simultaneously presenting Poisson spiking representations of these input patterns to the network." (A sketch of such a Poisson encoding appears after the table.)
Dataset Splits | No | The paper mentions a "test set" for evaluation but does not describe a separate validation set or how the data was split into training, validation, and test portions.
Hardware Specification | No | The paper describes the neural network models and simulations but does not specify the hardware used to run the experiments (e.g., CPU/GPU models, memory, or other computational resources).
Software Dependencies | No | The paper describes the mathematical framework and model components (e.g., stochastic spike response neurons, Poisson processes) but does not list specific software dependencies with version numbers (e.g., Python, TensorFlow, or PyTorch versions) required for reproduction.
Experiment Setup | Yes | "In the simulations we used µ = 0.5, σ = 1 and θ0 = 3. In our simulations we used a double-exponential kernel with time constants τm = 20 ms and τs = 2 ms [18]. The instantaneous firing rate ρk(t) of network neuron k depends exponentially on the membrane potential and is subject to divisive lateral inhibition Ilat(t) (described below): ρk(t) = ρnet / Ilat(t) · exp(uk(t)), where ρnet = 100 Hz scales the firing rate of neurons [16]. The network was trained by repeatedly drawing random instances of spoken and written digits ... within a learning session of 8 hours (of equivalent biological time)." (A sketch of the kernel and rate computation appears after the table.)
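
The Poisson spiking representation quoted in the Open Datasets row is not given in code in the paper. Below is a minimal sketch of one common construction, assuming pixel intensities in 0..255 are scaled linearly to firing rates and spikes are drawn independently in 1 ms bins; the maximum rate of 80 Hz, the stimulus duration, and the function name poisson_encode are illustrative assumptions, not values from the paper.

import numpy as np

def poisson_encode(image, duration_ms=200, max_rate_hz=80.0, seed=0):
    """Encode a flattened grayscale image (values 0..255) as Poisson spike trains.

    Returns a (duration_ms, n_pixels) boolean array; entry [t, i] is True
    if input channel i emits a spike during millisecond t.
    """
    rng = np.random.default_rng(seed)
    rates_hz = (image.astype(float) / 255.0) * max_rate_hz
    p_spike = rates_hz * 1e-3  # spike probability per 1 ms bin (rate * dt)
    return rng.random((duration_ms, rates_hz.size)) < p_spike

# Example: spikes = poisson_encode(mnist_digit.ravel()) for a 28x28 MNIST image.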
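
The Experiment Setup quote can likewise be made concrete. The sketch below implements the double-exponential PSP kernel (τm = 20 ms, τs = 2 ms) and the exponential rate function under divisive lateral inhibition with ρnet = 100 Hz, all taken from the quote; the peak normalization of the kernel and the choice Ilat(t) = Σl exp(ul(t)) (which makes the network rates sum to ρnet) are assumptions about details the quote leaves open.

import numpy as np

TAU_M, TAU_S = 20.0, 2.0  # membrane and synaptic time constants in ms (from the paper)
RHO_NET = 100.0           # network firing-rate scale in Hz (from the paper)

def psp_kernel(t_ms):
    """Double-exponential PSP kernel exp(-t/τm) - exp(-t/τs), peak-normalized (assumption)."""
    eps = np.exp(-t_ms / TAU_M) - np.exp(-t_ms / TAU_S)
    t_peak = (TAU_M * TAU_S / (TAU_M - TAU_S)) * np.log(TAU_M / TAU_S)
    peak = np.exp(-t_peak / TAU_M) - np.exp(-t_peak / TAU_S)
    return eps / peak

def firing_rates(u):
    """Instantaneous rates ρk = ρnet / Ilat * exp(uk) for all neurons at one time step.

    Assumes Ilat = Σl exp(ul), so the returned rates sum to RHO_NET. Subtracting
    u.max() only improves numerical stability; the ratios are unchanged.
    """
    exp_u = np.exp(u - u.max())
    return RHO_NET * exp_u / exp_u.sum()

With this normalization the circuit behaves like a soft winner-take-all: adding a constant to every membrane potential leaves the rate distribution unchanged, and only relative potentials determine which neuron fires.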