Learning efficient task-dependent representations with synaptic plasticity

Authors: Colin Bredenberg, Eero Simoncelli, Cristina Savin

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Here we construct a stochastic recurrent neural circuit model that can learn efficient, task-specific sensory codes using a novel form of reward-modulated Hebbian synaptic plasticity. We illustrate the flexibility of the model by training an initially unstructured neural network to solve two different tasks: stimulus estimation, and stimulus discrimination. The network achieves high performance in both tasks by appropriately allocating resources and using its recurrent circuitry to best compensate for different levels of noise. We also show how the interaction between stimulus priors and task structure dictates the emergent network representations. ... Numerical results. To simulate the realistic scenario of slowly changing input stimuli constantly driving the circuit, we sample inputs from a von Mises prior distribution using Langevin dynamics with a significantly slower time constant than that of the recurrent network dynamics, given by τs = 375. We set the noise level to an intermediate regime, so that its effects on circuit computation are non-negligible, but not pathological (σ = 0.2), and calibrate the hyperparameters that control the metabolic constraints (strength of regularization) to a level where they start to interfere with network performance. As learning progresses, our derived local plasticity rules quickly converge to a good solution, for both estimation and categorization (Fig. 1b). (The quoted stimulus-generation procedure is illustrated in the first sketch below the table.)
Researcher Affiliation | Academia | Colin Bredenberg, Center for Neural Science, New York University (cjb617@nyu.edu); Eero P. Simoncelli, Center for Neural Science and Howard Hughes Medical Institute, New York University (eero.simoncelli@nyu.edu); Cristina Savin, Center for Neural Science and Center for Data Science, New York University (csavin@nyu.edu)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It presents mathematical equations and descriptions of the model and learning rules.
Open Source Code | Yes | For parameter values, see https://github.com/colinbredenberg/Efficient-Plasticity-Camera-Ready (footnote 1).
Open Datasets | No | The paper uses stimuli drawn from a von Mises distribution, which is a synthetic distribution generated for the simulation, not a named, publicly available dataset with concrete access information such as a URL, DOI, or a citation to an established benchmark.
Dataset Splits | No | The paper describes sampling inputs and testing over 'time units' and 'test stimuli', but it does not specify explicit train/validation/test dataset splits in terms of percentages or sample counts, nor does it refer to standard predefined splits from established datasets.
Hardware Specification | No | The paper describes a theoretical neural circuit model and its simulation, but it does not provide any specific details about the computer hardware (e.g., CPU, GPU models, memory) used to perform the simulations or run the experiments.
Software Dependencies | No | The paper mentions numerical integration methods (e.g., Euler-Maruyama integration) but does not list any specific software libraries, frameworks, or version numbers used in the implementation of the model or experiments.
Experiment Setup | Yes | We set the noise level to an intermediate regime, so that its effects on circuit computation are non-negligible, but not pathological (σ = 0.2), and calibrate the hyperparameters that control the metabolic constraints (strength of regularization) to a level where they start to interfere with network performance. ... To simulate the realistic scenario of slowly changing input stimuli constantly driving the circuit, we sample inputs from a von Mises prior distribution using Langevin dynamics with a significantly slower time constant than that of the recurrent network dynamics, given by τs = 375. ... For parameter values, see https://github.com/colinbredenberg/Efficient-Plasticity-Camera-Ready (footnote 1). (Both quoted elements of the setup are illustrated in the hedged sketches below.)
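
The Research Type and Experiment Setup rows quote the paper's stimulus-generation procedure: inputs are drawn from a von Mises prior via Langevin dynamics whose time constant (τs = 375) is much slower than the recurrent network dynamics. The following is a minimal sketch of how such a slowly varying circular stimulus could be generated. Only τs = 375 is stated in the quoted text; the step size dt, the concentration kappa, and the Euler-Maruyama discretization of the Langevin equation are assumptions made for illustration.

```python
import numpy as np

def sample_stimulus_trajectory(T, dt=0.1, tau_s=375.0, kappa=1.0, seed=0):
    """Slowly varying stimulus via Langevin dynamics whose stationary
    distribution is a von Mises prior p(s) ~ exp(kappa * cos(s)).

    Only tau_s = 375 comes from the quoted setup; dt, kappa, and the
    discretization scheme are assumptions.
    """
    rng = np.random.default_rng(seed)
    n_steps = int(T / dt)
    s = np.zeros(n_steps)
    s[0] = rng.vonmises(mu=0.0, kappa=kappa)  # initialize from the prior
    for t in range(1, n_steps):
        # Drift = (1 / tau_s) * d/ds log p(s) = -(kappa / tau_s) * sin(s)
        drift = -(kappa / tau_s) * np.sin(s[t - 1])
        noise = np.sqrt(2.0 * dt / tau_s) * rng.standard_normal()
        s[t] = s[t - 1] + drift * dt + noise
        # Wrap the circular variable back into (-pi, pi]
        s[t] = np.arctan2(np.sin(s[t]), np.cos(s[t]))
    return s
```

Because 1/τs scales both the drift and the diffusion, the trajectory drifts slowly relative to the network dynamics while still sampling the von Mises prior over long runs, matching the "slowly changing input stimuli" described in the quote.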
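
The Software Dependencies and Experiment Setup rows mention Euler-Maruyama integration and a noise level of σ = 0.2, but the paper's exact circuit equations are not reproduced here. Below is a hedged sketch of how a noisy recurrent rate network driven by the stimulus above could be integrated. The leaky-rate dynamics, tanh nonlinearity, (cos, sin) input encoding, and all parameter values other than sigma = 0.2 are illustrative assumptions, not the paper's specification.

```python
import numpy as np

def simulate_network(stimulus, W, U, tau_r=1.0, dt=0.1, sigma=0.2,
                     nonlinearity=np.tanh, seed=1):
    """Euler-Maruyama integration of noisy recurrent rate dynamics
    driven by a slowly varying circular stimulus.

    Assumptions: leaky rate units, tanh nonlinearity, additive Gaussian
    noise, and a (cos, sin) encoding of the stimulus. Only sigma = 0.2
    and the use of Euler-Maruyama integration are taken from the quotes.
    """
    rng = np.random.default_rng(seed)
    n_steps = len(stimulus)
    n_neurons = W.shape[0]
    r = np.zeros((n_steps, n_neurons))
    for t in range(1, n_steps):
        # Encode the circular stimulus as a 2-d input (assumption)
        x = np.array([np.cos(stimulus[t - 1]), np.sin(stimulus[t - 1])])
        drift = (-r[t - 1] + nonlinearity(W @ r[t - 1] + U @ x)) / tau_r
        noise = sigma * np.sqrt(dt) * rng.standard_normal(n_neurons)
        r[t] = r[t - 1] + drift * dt + noise
    return r

# Hypothetical usage: 50 neurons, 2-d input, network initially unstructured
rng = np.random.default_rng(2)
stim = sample_stimulus_trajectory(T=1000.0, dt=0.1)
W = 0.1 * rng.standard_normal((50, 50))
U = rng.standard_normal((50, 2))
rates = simulate_network(stim, W, U, dt=0.1)
```

This sketch only reproduces the forward simulation implied by the quoted setup; the reward-modulated Hebbian plasticity rules that update W and U during learning are derived in the paper and are not attempted here.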