The computational and learning benefits of Daleian neural networks

Authors: Adam Haber, Elad Schneidman

NeurIPS 2022

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | Here, we use models of recurrent spiking neural networks and rate-based networks to show, surprisingly, that despite the structural limitations on Daleian networks, they can approximate the computation performed by non-Daleian networks to a very high degree of accuracy. Moreover, we find that Daleian networks are more functionally robust to synaptic noise. We then show that unlike non-Daleian networks, Daleian ones can learn efficiently by tuning single neuron features, nearly as well as learning by tuning individual synaptic weights, suggesting a simpler and more biologically plausible learning mechanism. We simulate the responses of large ensembles of Daleian and non-Daleian networks to rich sets of stimuli, and measure the functional similarity between them in terms of the overlap of the distributions of their spiking responses. (A minimal sketch of this distributional comparison appears after the table.)
Researcher Affiliation | Academia | Adam Haber, Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel (adam.haber@weizmann.ac.il); Elad Schneidman, Department of Brain Sciences, Weizmann Institute of Science, Rehovot, Israel (elad.schneidman@weizmann.ac.il)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] Link to a repository containing the code will be attached as supplemental material.
Open Datasets | Yes | We used real spiking patterns of groups of 10 neurons recorded from the prefrontal cortex of macaque monkeys performing a visual classification task [29].
Dataset Splits | No | The paper describes how networks and stimuli were sampled and how 'learning' was performed (e.g., gradient descent), but it does not specify explicit training, validation, and test data splits in percentages or sample counts for reproducing the data partitioning.
Hardware Specification | No | All analysis was conducted on an internal cluster.
Software Dependencies | No | The paper cites JAX [40] with its publication year (2018) but does not give a specific version number for the library, and it lists no other software dependencies with version numbers.
Experiment Setup | Yes | For the non-Daleian (nD) networks, synapses were sampled from a Gaussian distribution, $W^{nD}_{ij} \sim \mathcal{N}(0, \tfrac{1}{\sqrt{N}})$, where $i, j \in 1 \dots N$. For the Daleian (D) networks, half of the neurons were selected to be excitatory and half inhibitory, and the outgoing synaptic weights of each neuron were sampled from the positive or negative parts of a normal distribution, namely $W^{D}_{ij} \sim |\mathcal{N}(0, \tfrac{1}{\sqrt{N}})| \cdot \sigma(i)$, where $\sigma(i) = 1$ for excitatory neurons and $-1$ for inhibitory ones. The stimuli were sampled from a normal distribution, $s_i \sim \mathcal{N}(0, 1)$, $i \in 1 \dots N$. We used gradient descent on $W^{D}$ to minimize $L(W^{D}) = D_{JS}\!\left( p_{W^{D}}(x \mid s) \,\|\, p_{W^{nD}}(x \mid s) \right)$. We performed such learning for 2000 different non-Daleian networks $W^{nD}$, each with a different stimulus $s$. Adam optimizer [39] with a learning rate of 0.01. (A hedged code sketch of this setup follows the table.)
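
The functional similarity quoted under Research Type, the overlap between the distributions of spiking responses, is the quantity the Experiment Setup row denotes $D_{JS}$. The snippet below is a minimal sketch of that comparison in JAX (the framework the paper reports using); the helper names `empirical_pattern_dist` and `js_divergence` are ours, not the authors', and assume binary spike patterns over a small number of neurons.

```python
# Minimal sketch (not the authors' code): compare two spiking networks by the
# Jensen-Shannon divergence between the empirical distributions of their binary
# response patterns to the same stimulus.
import jax.numpy as jnp

def empirical_pattern_dist(spikes, n_neurons):
    """spikes: (n_samples, n_neurons) binary array -> distribution over all 2**n_neurons patterns."""
    # Encode each binary response pattern as an integer in [0, 2**n_neurons).
    codes = (spikes.astype(int) @ (2 ** jnp.arange(n_neurons))).astype(int)
    counts = jnp.bincount(codes, length=2 ** n_neurons)
    return counts / counts.sum()

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence (in nats) between two discrete distributions p and q."""
    m = 0.5 * (p + q)
    kl = lambda a, b: jnp.sum(jnp.where(a > 0, a * (jnp.log(a + eps) - jnp.log(b + eps)), 0.0))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)
```

For groups of about 10 neurons, as in the macaque recordings cited under Open Datasets, the full 2^10-bin histogram is small enough to estimate directly; larger networks would need a different estimator.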
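
Below is a hedged JAX sketch of the sampling and learning procedure quoted in the Experiment Setup row. It treats $1/\sqrt{N}$ as the standard deviation and row $i$ of $W$ as the outgoing weights of neuron $i$ (conventions the quoted text does not fix), uses `optax` as an assumed implementation of the Adam optimizer with learning rate 0.01, and leaves the network's response distribution $p_W(x \mid s)$ as a hypothetical placeholder `response_dist`, since the quoted text does not describe the network dynamics.

```python
# Sketch under stated assumptions, not the authors' implementation.
import jax
import jax.numpy as jnp
import optax  # assumed optimizer library; the paper only states Adam with lr = 0.01

def sample_networks(key, N):
    """Sample one non-Daleian and one Daleian N x N weight matrix."""
    k1, k2 = jax.random.split(key)
    # Non-Daleian: W^nD_ij ~ N(0, 1/sqrt(N)), assuming 1/sqrt(N) is the std.
    W_nD = jax.random.normal(k1, (N, N)) / jnp.sqrt(N)
    # Daleian: half excitatory (+1), half inhibitory (-1); each neuron's
    # outgoing weights share its sign sigma(i).
    sigma = jnp.concatenate([jnp.ones(N // 2), -jnp.ones(N - N // 2)])
    W_D = jnp.abs(jax.random.normal(k2, (N, N))) / jnp.sqrt(N) * sigma[:, None]
    return W_nD, W_D

def sample_stimulus(key, N):
    return jax.random.normal(key, (N,))  # s_i ~ N(0, 1)

def fit_daleian(W_D_init, W_nD, s, response_dist, steps=1000):
    """Gradient descent on W^D to minimize D_JS(p_{W^D}(x|s) || p_{W^nD}(x|s)).

    `response_dist(W, s)` is a hypothetical stand-in for the network's response
    distribution p_W(x|s); `js_divergence` is defined in the previous sketch.
    """
    target = response_dist(W_nD, s)  # fixed non-Daleian reference distribution
    loss_fn = lambda W: js_divergence(response_dist(W, s), target)
    opt = optax.adam(learning_rate=0.01)  # Adam, lr = 0.01, as quoted
    opt_state = opt.init(W_D_init)
    W = W_D_init
    for _ in range(steps):
        grads = jax.grad(loss_fn)(W)
        updates, opt_state = opt.update(grads, opt_state)
        W = optax.apply_updates(W, updates)
    return W
```

The quoted setup repeats this for 2000 independently sampled non-Daleian networks, each paired with its own stimulus; the number of gradient steps per network is not stated in the excerpt, so `steps` here is arbitrary.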