Adaptive Whitening in Neural Populations with Gain-modulating Interneurons

Authors: Lyndon Duong, David Lipshutz, David Heeger, Dmitri Chklovskii, Eero P. Simoncelli

ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate numerically that sign-constraining the gains improves robustness of the network to ill-conditioned inputs, and a generalization of the circuit achieves a form of local whitening in convolutional populations, such as those found throughout the visual or auditory systems. We evaluate the performance of our adaptive whitening algorithm using the matrix operator norm, ‖·‖_Op, which measures the largest eigenvalue: Error := ‖C_yy − I_N‖_Op. (An error-metric sketch follows the table.)
Researcher Affiliation | Academia | ¹Center for Neural Science, New York University; ²Center for Computational Neuroscience, Flatiron Institute; ³Neuroscience Institute, NYU School of Medicine. Correspondence to: Lyndon R. Duong <lyndon.duong@nyu.edu>, David Lipshutz <dlipshutz@flatironinstitute.org>.
Pseudocode | Yes | Algorithm 1: Adaptive whitening via gain modulation (a runnable Python sketch follows the table)
    1: Input: centered inputs x_1, x_2, ... ∈ ℝ^N
    2: Initialize: W ∈ ℝ^{N×K}; g ∈ ℝ^K; η, γ > 0
    3: for t = 1, 2, . . . do
    4:    y_t ← 0
    5:    while not converged do
    6:       z_t ← Wᵀ y_t
    7:       y_t ← y_t + γ {x_t − W(g ∘ z_t) − y_t}
    8:    end while
    9:    g ← g + η (z_t ∘ z_t − 1)
    10: end for
Open Source Code | Yes | Python code for this study can be located at github.com/lyndond/frame_whitening.
Open Datasets | Yes | We simulate an experiment of visual gaze fixations and micro-saccadic eye movements using a Gaussian random walk, drawing 12×12 patch samples from a region of a natural image (Figure 6A; van Hateren & van der Schaaf, 1998). (A sampling sketch follows the table.)
Dataset Splits | No | The paper describes online adaptive algorithms and simulations without explicit train/validation/test splits in the traditional machine-learning sense.
Hardware Specification | No | The paper does not provide specific details about the hardware used for running experiments, such as GPU/CPU models, memory, or cloud resources.
Software Dependencies | No | The paper mentions "Python code" being available but does not specify any software dependencies with version numbers (e.g., specific deep learning frameworks such as PyTorch or TensorFlow, or other libraries).
Experiment Setup | Yes | Algorithm 1 initializes η, γ > 0. Numerical experiments give specific values: "For each network, η = 1E-2."; "N = 2, K = K_N = 3, η = 2E-3"; "N = 6; K = K_N = 21; η = 1E-3"; "N = 2, K = 3, η = 0.02"; "ζ = 1E-3. We update the gains g according to Algorithm 1 with η = 10ζ". (A usage example with one of these settings follows the table.)
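
The pseudocode in the table translates directly into NumPy. Below is a minimal sketch, not the authors' released implementation (see github.com/lyndond/frame_whitening for that); the fixed inner-loop iteration count, the step-size default gamma=0.1, and the all-ones gain initialization are illustrative assumptions, since Algorithm 1 leaves them open.

```python
import numpy as np

def adaptive_whiten(X, W, eta=1e-2, gamma=0.1, n_inner=200):
    """Sketch of Algorithm 1: adaptive whitening via gain modulation.

    X : (T, N) array of centered inputs x_t.
    W : (N, K) fixed synaptic weight matrix.
    Returns the steady-state outputs Y (T, N) and final gains g (K,).
    """
    T, N = X.shape
    K = W.shape[1]
    g = np.ones(K)                    # gain initialization (assumption)
    Y = np.empty((T, N))
    for t in range(T):
        y = np.zeros(N)               # line 4: y_t <- 0
        for _ in range(n_inner):      # lines 5-8: fixed iteration count
            z = W.T @ y               # line 6: interneuron responses
            y += gamma * (X[t] - W @ (g * z) - y)   # line 7: recurrent dynamics
        z = W.T @ y
        g += eta * (z * z - 1.0)      # line 9: local update, Δg ∝ z_t² − 1
        Y[t] = y
    return Y, g
```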
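
The error metric quoted in the Research Type row is a one-liner: since C_yy − I_N is symmetric, its operator norm is its largest-magnitude eigenvalue, which np.linalg.norm with ord=2 computes as the largest singular value. A minimal sketch, assuming outputs are stacked row-wise:

```python
import numpy as np

def whitening_error(Y):
    """Error := ‖C_yy − I_N‖_Op for row-stacked outputs Y of shape (T, N)."""
    T, N = Y.shape
    C_yy = Y.T @ Y / T                            # empirical output covariance
    return np.linalg.norm(C_yy - np.eye(N), ord=2)
```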
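
The gaze-fixation simulation in the Open Datasets row amounts to a Gaussian random walk over image coordinates with a 12×12 crop at each step. A sketch under stated assumptions: the step scale, border clipping, and centering are illustrative choices, and `image` stands in for a van Hateren natural image loaded elsewhere.

```python
import numpy as np

def gaze_patch_stream(image, T, patch=12, step=1.0, seed=0):
    """Draw T flattened patch-by-patch crops along a Gaussian random walk,
    mimicking fixational drift over a natural image (cf. Figure 6A)."""
    rng = np.random.default_rng(seed)
    H, W_img = image.shape
    pos = np.array([(H - patch) / 2.0, (W_img - patch) / 2.0])
    X = np.empty((T, patch * patch))
    for t in range(T):
        pos += rng.normal(scale=step, size=2)              # random-walk gaze step
        pos = np.clip(pos, 0, [H - patch, W_img - patch])  # stay inside the image
        r, c = pos.astype(int)
        X[t] = image[r:r + patch, c:c + patch].ravel()
    return X - X.mean(axis=0)                              # center, as Algorithm 1 assumes
```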
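
Tying the sketches together with one of the quoted settings (N = 2, K = K_N = 3, η = 2E-3; note K_N = N(N+1)/2, which also gives the quoted K_N = 21 for N = 6). The random unit-norm frame W and the synthetic input covariance below are stand-ins for the frames and stimuli used in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
N, K, eta = 2, 3, 2e-3                      # one of the quoted settings
W = rng.normal(size=(N, K))
W /= np.linalg.norm(W, axis=0)              # unit-norm frame vectors (assumption)

C = np.array([[2.0, 0.8],                   # an arbitrary ill-conditioned
              [0.8, 1.0]])                  # input covariance
X = rng.multivariate_normal(np.zeros(N), C, size=10_000)

Y, g = adaptive_whiten(X, W, eta=eta)
print(whitening_error(Y[-2_000:]))          # error should shrink as gains adapt
```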