Adaptive whitening with fast gain modulation and slow synaptic plasticity
Authors: Lyndon Duong, Eero Simoncelli, Dmitri Chklovskii, David Lipshutz
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We test our model on synthetic and natural datasets and find that the synapses learn optimal configurations over long timescales that enable adaptive whitening on short timescales using gain modulation." Also, from Sec. 5 (Numerical experiments): "We test Alg. 1 on stimuli s1, s2, . . . drawn from slowly fluctuating latent contexts c1, c2, . . ." |
| Researcher Affiliation | Academia | Center for Computational Neuroscience, Flatiron Institute; Center for Neural Science, New York University; Neuroscience Institute, NYU Langone Medical School. Emails: {lyndon.duong, eero.simoncelli}@nyu.edu; {dchklovskii, dlipshutz}@flatironinstitute.org |
| Pseudocode | Yes | Algorithm 1: Multi-timescale adaptive whitening via synaptic plasticity and gain modulation |
| Open Source Code | Yes | Python code accompanying this study can be found at https://github.com/lyndond/multi_timescale_whitening. |
| Open Datasets | Yes | "We test our algorithm on 56 high-resolution natural images [34]" where [34] is: JH van Hateren and A van der Schaaf. Independent component filters of natural images compared with simple cells in primary visual cortex. Proceedings: Biological Sciences, 265(1394):359–366, 1998. |
| Dataset Splits | No | "We train our algorithm in the offline setting where we have direct access to the context-dependent covariance matrices (Appx. C, Alg. 2, α = 1, J = 50, ηg = 5E-1, ηw = 5E-2) with K = N = 25 and random W0 ∈ O(25) on a training set of 50 of the images, presented uniformly at random 1E3 total times." and "We test the circuit with fixed synaptic weights WT and modulated (adaptive) gains g on stimuli from the held-out images." There is no explicit mention of a validation set, or of how the images were split beyond the training and held-out test sets. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Python code' in footnote 2 but does not list any specific software dependencies with version numbers. |
| Experiment Setup | Yes | "We train our algorithm in the offline setting where we have direct access to the context-dependent covariance matrices (Appx. C, Alg. 2, α = 1, J = 50, ηg = 5E-1, ηw = 5E-2) with K = N = 25 and random W0 ∈ O(25) on a training set of 50 of the images, presented uniformly at random 1E3 total times." |
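
The Experiment Setup row describes fast gain adaptation against slowly learned synapses. As a rough, self-contained sketch of that idea (not the paper's Alg. 1): if the frame W is frozen at the eigenbasis of a context covariance C, standing in for synapses learned over long timescales, then per-neuron gains g can be adapted on a fast timescale until every output variance reaches 1, which whitens the responses. The random covariance and the homeostatic gain update below are illustrative assumptions, not the authors' method.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 25  # matches K = N = 25 from the setup row above

# Hypothetical context covariance (random, for illustration only --
# the paper estimates covariances from natural image patches).
A = rng.standard_normal((N, N))
C = A @ A.T / N + 0.1 * np.eye(N)

# "Slow" variable: frame W, here frozen at the eigenbasis of C as a
# stand-in for synapses learned over long timescales.
lam, W = np.linalg.eigh(C)

# "Fast" variable: per-neuron gains g, adapted until each output
# variance reaches 1 (an assumed homeostatic rule, not the paper's).
g = np.ones(N)
eta_g = 0.5  # gain step size (same magnitude as eta_g = 5E-1 above)
for _ in range(200):
    var = g**2 * lam              # Var(y_i) for y = diag(g) @ W.T @ s
    g += eta_g * g * (1.0 - var)  # push each output variance toward 1

# Whitening check: Cov(y) = diag(g) W^T C W diag(g) should be ~identity.
Y_cov = np.diag(g) @ W.T @ C @ W @ np.diag(g)
print(np.allclose(Y_cov, np.eye(N), atol=1e-4))
```

Because W diagonalizes C here, each gain converges independently to |g_i| = λ_i^{-1/2}; in the paper's setting W is itself learned over long timescales rather than computed in closed form.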