Enforcing balance allows local supervised learning in spiking recurrent networks
Authors: Ralph Bourdoukan, Sophie Denève
NeurIPS 2015
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | As a toy example, we simulated a 20-neuron network learning a 2D damped oscillator using a feedback gain K = 100. The network is initialized with weak fast connections and weak slow connections. Learning is driven by smoothed Gaussian noise as the command c. Note that in the initial state, because of the absence of fast recurrent connections, the output of the network does not depend linearly on the input, since membrane potentials are hyperpolarized (Fig 3B). The network's output is quickly linearized through the learning of the fast connections (equation 2), which enforces a balance on the membrane potential (Fig 3B): the initial membrane potentials exhibit large fluctuations, which shrink drastically after a few iterations (Fig 3B). On a slower time scale, the slow connections learn to minimize the prediction error using the learning rule of equation 9, and the error between the output of the network and the desired output decreases drastically (Fig 3B). To compute this error, different instances of the connectivity matrices were sampled during learning; the network was then re-simulated using these instances while fixing K = 0, in order to measure performance in the absence of feedback. At the end of learning, the slow and fast connections converge to their predicted values W^s = F(A + λI)F^T and W^f = FF^T (Fig 3C). The feedback is then no longer required for the network to have the right dynamics: setting K = 0 still yields the desired output (Fig 3D and 3B). The output of the network is very accurate (representing the state x with a precision on the order of the contribution of a single spike), parsimonious (i.e. it does not spend more spikes than needed to represent the dynamical state at this level of accuracy), and the spike trains are asynchronous and irregular. A runnable sketch of the converged network, and a schematic of the balance-enforcing update, follow the table. |
| Researcher Affiliation | Academia | Ralph Bourdoukan, Group for Neural Theory, ENS Paris, Rue d'Ulm 29, Paris, France, ralph.bourdoukan@gmail.com; Sophie Denève, Group for Neural Theory, ENS Paris, Rue d'Ulm 29, Paris, France, sophie.deneve@ens.fr |
| Pseudocode | No | The paper does not contain any structured pseudocode or clearly labeled algorithm blocks. Learning rules are presented as mathematical equations. |
| Open Source Code | No | The paper does not provide any concrete access to source code, such as a repository link or an explicit statement about code release in supplementary materials. |
| Open Datasets | No | The paper describes simulating a network with "random input signals" and "smoothed gaussian noise" as the command. It does not use a pre-existing, publicly available dataset; instead, it generates data internally for simulation. |
| Dataset Splits | No | The paper describes a simulated environment where learning occurs and performance is measured. It does not provide specific training/validation/test dataset splits for a pre-defined dataset, as the input signals are generated for the simulation. |
| Hardware Specification | No | No specific hardware details (e.g., CPU, GPU models, memory, or specific computing environments) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., programming languages, libraries, or solvers). |
| Experiment Setup | Yes | Simulation parameters. Figure 1: λ = 0.05, β = 0.51, learning rate: 0.01. Figure 3: λ = 50, λ_V = 1, β = 0.52, K = 100, learning rate of the fast connections: 0.03, learning rate of the slow connections: 0.15. These values are collected into a config sketch after the table. |
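
To make the Research Type row above concrete, here is a minimal NumPy sketch of the *converged* network: connectivity is set directly to the predicted values W^s = F(A + λI)F^T and W^f = FF^T, and the feedback gain is K = 0, matching the paper's evaluation protocol. The decoder F, the oscillator matrix A, the command signal, the time step, and the use of a single leak λ for both membrane and readout are our assumptions (the paper does not list them); only the connectivity formulas and the K = 0 protocol come from the row above.

```python
import numpy as np

rng = np.random.default_rng(0)

# A 2-D dynamical system represented by a 20-neuron network, as in the toy example.
J, N = 2, 20
dt = 1e-3
lam = 50.0                             # leak; Fig. 3 reports lambda = 50

# Damped oscillator x_dot = A x + c(t). The paper does not list A, so these
# oscillator parameters are hypothetical.
A = np.array([[0.0, 1.0],
              [-4.0, -0.3]])

# Random decoding kernels: row i of F is neuron i's contribution to the readout.
F = rng.normal(size=(N, J))
F *= 0.1 / np.linalg.norm(F, axis=1, keepdims=True)

# Predicted optimal connectivity (the values the learned weights converge to):
Wf = F @ F.T                           # fast connections  W^f = F F^T
Ws = F @ (A + lam * np.eye(J)) @ F.T   # slow connections  W^s = F (A + lam I) F^T
T = 0.5 * np.sum(F**2, axis=1)         # spiking thresholds T_i = ||F_i||^2 / 2

steps = int(2.0 / dt)
V = np.zeros(N)                        # membrane potentials
r = np.zeros(N)                        # filtered ("slow") spike trains
x = np.zeros(J)                        # ground-truth state, for comparison
err = np.zeros(steps)

for t in range(steps):
    c = np.array([np.sin(2 * np.pi * t * dt), 0.0])  # example command signal
    # Membrane dynamics: leak + feedforward command + slow recurrence.
    # K = 0 here, i.e. no error feedback, as in the evaluation protocol.
    V += dt * (-lam * V + F @ c + Ws @ r)
    o = (V > T).astype(float)          # spikes where thresholds are crossed
    V -= Wf @ o                        # fast recurrence: near-instantaneous balance
    r += -dt * lam * r + o             # leaky integration of spikes
    x += dt * (A @ x + c)              # the target dynamical system
    err[t] = np.linalg.norm(F.T @ r - x)  # readout x_hat = F^T r vs. true x

print("mean decoding error:", err.mean())
```

Allowing several neurons to spike in the same time step is a simplification; finer-grained simulations typically resolve one spike at a time so the fast connections can act between spikes.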
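
The paper's fast-connection learning rule (its equation 2) is not quoted in this report, so the update below is only a schematic stand-in, not the authors' rule: a local, spike-triggered update whose fixed point ties each weight to the postsynaptic membrane state at presynaptic spike times, which is the balance-enforcing idea the Research Type row describes. The function name, the exact form of the update, and the target quantity (V_i + β r_i) are all assumptions.

```python
import numpy as np

def fast_balance_update(Wf, V, r, spikes, beta=0.5, eta=0.03):
    """Schematic stand-in for a local balance-enforcing rule (NOT the paper's
    exact equation 2). On each spike of presynaptic neuron j, W^f[:, j] is
    nudged toward (V + beta * r): inhibitory weights onto depolarized
    postsynaptic neurons grow, so future spikes of j cancel the membrane
    fluctuations they would otherwise leave behind. eta = 0.03 mirrors the
    fast-connection learning rate reported for Fig. 3; beta ~ 0.5 mirrors
    the reported beta values."""
    for j in np.flatnonzero(spikes):
        Wf[:, j] += eta * ((V + beta * r) - Wf[:, j])
    return Wf
```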
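
For convenience, the simulation parameters reported in the Experiment Setup row, gathered into a single config structure. The values are the paper's; the dictionary layout, key names, and per-key interpretations in the comments are ours.

```python
# Parameter values exactly as reported in the paper.
SIM_PARAMS = {
    "figure_1": {"lambda": 0.05, "beta": 0.51, "learning_rate": 0.01},
    "figure_3": {
        "lambda": 50,      # readout/decoder leak (our interpretation)
        "lambda_V": 1,     # membrane leak (our interpretation)
        "beta": 0.52,
        "K": 100,          # feedback gain during learning
        "lr_fast": 0.03,   # learning rate of the fast connections
        "lr_slow": 0.15,   # learning rate of the slow connections
    },
}
```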