Discovering plasticity rules that organize and maintain neural circuits

Authors: David Bell, Alison Duffy, Adrienne Fairhall

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Candidate rules are simulated within initially random networks, and their fitness is evaluated according to a loss function that measures the fidelity with which the resulting dynamics encode time. We use this approach to introduce biological noise, forcing meta-learning to find robust solutions. We first show that, in the absence of perturbation, meta-learning identifies a temporally asymmetric generalization of Oja's rule that reliably organizes sparse sequential activity. (An illustrative sketch of such an update appears after this table.)
Researcher Affiliation | Academia | David Bell, Department of Physics, University of Washington, Seattle, WA 98195, davidgbe@uw.edu; Alison Duffy, Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195; Adrienne Fairhall, Department of Physiology and Biophysics, University of Washington, Seattle, WA 98195
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code is referenced in the supplementary section "Code Availability."
Open Datasets | No | The paper uses simulated networks and does not mention or provide access to a pre-existing publicly available dataset. Instead, it describes its own simulation setup: "We use a network of 25 E and 8 I threshold-linear neurons... Initial connectivity...was random...".
Dataset Splits | No | The paper describes how a decoder was trained and tested on activations from the simulated networks, but it does not specify conventional training/validation/test splits of a fixed dataset: "From the final 50 activations, six are chosen to train a decoder and six to test the representation by decoding time from neural activity." (A sketch of this split protocol appears after this table.)
Hardware Specification | Yes | "Training E→E plasticity (plasticity on all synapses) across 10 networks in batch typically required 24 (72) hours of compute on 30 Cascade Lake or Ice Lake Intel CPU cores to yield reasonable solutions."
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers like Python 3.8, CPLEX 12.4).
Experiment Setup | Yes | "We use a network of 25 E and 8 I threshold-linear neurons. Each neuron fired according to $x_j(t) = [V_j(t) - b]_+$, where $V_j(t)$ evolves via $\tau_m \dot{V}_j(t) = -V_j(t) + \sum_i w_{ij} x_i(t)$. Here, $w_{ij}$ is the weight of the synapse $i \to j$, $\tau_m$ is the membrane time constant, and $b$ is the bias. Initial connectivity (Fig. 2d) was random... Each simulation is divided into 400 activations of 110 ms. At t = 10 ms of each activation, a single fixed neuron is driven by a strong kick of excitation [17]. Following this, all other neurons in the network receive Poisson-distributed input for a period of 65 ms. ...We use CMA-ES to sample from the space of possible $\{c_k, \tau_k\}$ and evaluate the loss at each point by simulating 10 randomly initialized networks under the given rule..." (A runnable sketch of this setup appears after this table.)
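
To make the "temporally asymmetric generalization of Oja's rule" concrete, here is a minimal Python sketch of one plausible form. The paper meta-learns the rule as a parameterized family $\{c_k, \tau_k\}$; the trace time constant `tau_pre`, learning rate `eta`, and the exact update below are illustrative assumptions, not the discovered rule.

```python
import numpy as np

def asymmetric_oja_step(w, pre_trace, x_pre, x_post, dt=1.0,
                        tau_pre=20.0, eta=1e-3):
    """One plasticity step (illustrative form, not the paper's exact rule).

    pre_trace is a low-pass filtered copy of presynaptic activity, so the
    Hebbian term pairs *past* presynaptic firing with *current* postsynaptic
    firing -- the temporal asymmetry. The -x_post**2 * w term is Oja-style
    multiplicative normalization, which bounds weight growth.
    """
    pre_trace = pre_trace + (dt / tau_pre) * (x_pre - pre_trace)
    w = w + eta * (np.outer(pre_trace, x_post) - (x_post ** 2) * w)
    return w, pre_trace

# Example: one step on a 33-neuron network with random activity
w, trace = np.zeros((33, 33)), np.zeros(33)
x = np.random.rand(33)
w, trace = asymmetric_oja_step(w, trace, x_pre=x, x_post=x)
```

Applied once per integration step, an update of this shape potentiates synapses from neurons that fired just before their targets, which is one way sparse sequential activity can self-organize.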
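The six-train/six-test protocol quoted in the Dataset Splits row can be mirrored in a few lines. Everything below is an assumed stand-in: synthetic activity replaces the simulated network recordings, and ridge regression replaces the paper's decoder, whose type is not specified in this excerpt.

```python
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import r2_score

rng = np.random.default_rng(0)

# Stand-in for recorded activity: (n_activations, n_timesteps, n_neurons).
# In the paper this would be activity from the simulated network.
activations = rng.normal(size=(400, 110, 25))

# From the final 50 activations, six train the decoder and six test it.
final = activations[-50:]
idx = rng.permutation(len(final))
train, test = final[idx[:6]], final[idx[6:12]]

t = np.arange(train.shape[1])  # target: elapsed time within an activation (ms)
X_tr, y_tr = train.reshape(-1, train.shape[-1]), np.tile(t, len(train))
X_te, y_te = test.reshape(-1, test.shape[-1]), np.tile(t, len(test))

decoder = Ridge(alpha=1.0).fit(X_tr, y_tr)
print("held-out time-decoding R^2:", r2_score(y_te, decoder.predict(X_te)))
```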
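Finally, a compact sketch of the quoted experiment loop: simulate threshold-linear networks under a $\{c_k, \tau_k\}$-parameterized rule and let CMA-ES (here via the `cma` package) search the rule space. Only the quoted numbers (25 E + 8 I neurons, 110 ms activations, kick at t = 10 ms, 65 ms of Poisson input, 10 networks per evaluation) come from the paper; the membrane constant, bias, kick amplitude, input rate, weight scale, rule form, and the placeholder loss are all assumptions.

```python
import numpy as np
import cma  # pip install cma

rng = np.random.default_rng(0)
N_E, N_I = 25, 8                 # quoted network size
N = N_E + N_I
DT, TAU_M, B = 1.0, 10.0, 0.1    # ms; tau_m and bias b are assumed values
T_ACT = 110                      # one 110 ms activation (quoted)

def simulate_activation(w, c, tau, eta=1e-4):
    """Run one activation, applying a {c_k, tau_k}-parameterized Hebbian
    rule online. The rule form here is an illustrative stand-in."""
    V, x = np.zeros(N), np.zeros(N)
    traces = np.zeros((len(tau), N))          # one presynaptic trace per tau_k
    rates = []
    for step in range(int(T_ACT / DT)):
        t = step * DT
        ext = np.zeros(N)
        if t == 10.0:
            ext[0] = 5.0                      # kick to one fixed neuron (assumed amplitude)
        elif 10.0 < t < 75.0:
            ext[1:] = rng.poisson(0.05 * DT, N - 1)  # 65 ms Poisson input (assumed rate)
        # Euler step of tau_m dV_j/dt = -V_j + sum_i w_ij x_i + input
        V += (DT / TAU_M) * (-V + w.T @ x + ext)
        x = np.maximum(V - B, 0.0)            # threshold-linear: x_j = [V_j - b]_+
        traces += (DT / tau[:, None]) * (x[None, :] - traces)
        for c_k, tr in zip(c, traces):        # dw_ij ~ sum_k c_k * trace_i^(k) * x_j
            w += eta * c_k * np.outer(tr, x)
        rates.append(x.copy())
    return w, np.asarray(rates)

def loss(params, n_nets=10, n_acts=5):
    """Placeholder objective. The paper scores how faithfully the dynamics
    encode time; -std(rates) is a cheap stand-in for the sketch, and
    n_acts is reduced from the paper's 400 activations per simulation."""
    c, tau = params[:3], np.abs(params[3:]) + 1.0
    scores = []
    for _ in range(n_nets):                   # 10 randomly initialized networks (quoted)
        w = rng.normal(0.0, 0.1, size=(N, N)) # random initial connectivity (assumed scale)
        for _ in range(n_acts):
            w, rates = simulate_activation(w, c, tau)
        scores.append(-rates.std())
    return float(np.mean(scores))

# CMA-ES ask/tell loop over the rule parameters {c_k, tau_k}
es = cma.CMAEvolutionStrategy(np.zeros(6), 0.5, {'popsize': 8, 'maxiter': 3})
while not es.stop():
    candidates = es.ask()
    es.tell(candidates, [loss(p) for p in candidates])
print("best parameters found:", es.result.xbest)
```

The ask/tell interface is used here because each fitness evaluation is an expensive batch of simulations; the paper's quoted compute budget (24 to 72 hours on 30 CPU cores) suggests the real loss is far costlier than this toy stand-in.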