Explaining V1 Properties with a Biologically Constrained Deep Learning Architecture

Authors: Galen Pogoncheff, Jacob Granley, Michael Beyeler

NeurIPS 2023

Reproducibility Variable Result LLM Response
Research Type: Experimental. "Upon enhancing task-driven CNNs with architectural components that simulate center-surround antagonism, local receptive fields, tuned normalization, and cortical magnification, we uncover models with latent representations that yield state-of-the-art explanation of V1 neural activity and tuning properties. Moreover, analyses of the learned parameters of these components and stimuli that maximally activate neurons of the evaluated networks provide support for their role in explaining neural properties of V1."
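The center-surround antagonism mentioned in this response is classically modeled as a Difference-of-Gaussians (DoG): an excitatory Gaussian center minus a broader inhibitory Gaussian surround. A minimal illustrative sketch of such a kernel follows; the function name and parameters are hypothetical and not taken from the authors' implementation:

```python
import math

def dog_kernel(size=7, sigma_c=1.0, sigma_s=2.0):
    """Difference-of-Gaussians kernel: an excitatory Gaussian center
    minus a broader inhibitory surround (requires sigma_s > sigma_c).
    Illustrative only -- not the paper's actual component."""
    half = size // 2

    def gauss(r2, sigma):
        # 2D isotropic Gaussian evaluated at squared radius r2
        return math.exp(-r2 / (2 * sigma ** 2)) / (2 * math.pi * sigma ** 2)

    return [
        [
            gauss((x - half) ** 2 + (y - half) ** 2, sigma_c)
            - gauss((x - half) ** 2 + (y - half) ** 2, sigma_s)
            for x in range(size)
        ]
        for y in range(size)
    ]
```

Convolving feature maps with such a kernel produces the characteristic antagonism: positive weights at the center, negative weights in the surround.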
Researcher Affiliation: Academia.
- Galen Pogoncheff, Department of Computer Science, University of California, Santa Barbara, Santa Barbara, CA 93106 (galenpogoncheff@ucsb.edu)
- Jacob Granley, Department of Computer Science, University of California, Santa Barbara, Santa Barbara, CA 93106 (jgranley@ucsb.edu)
- Michael Beyeler, Department of Computer Science and Department of Psychological & Brain Sciences, University of California, Santa Barbara, Santa Barbara, CA 93106 (mbeyeler@ucsb.edu)
Pseudocode: No. The paper provides mathematical formulas and descriptions of architectural components but does not include structured pseudocode or an algorithm block.
Open Source Code: Yes. "Code and materials required to reproduce the presented work are available at github.com/bionicvisionlab/2023-Pogoncheff-Explaining-V1-Properties."
Open Datasets: Yes. "V1 alignment was evaluated for ImageNet-trained models [55]. We additionally benchmarked each neuro-constrained model on the Tiny-ImageNet-C dataset to study the effect of V1 alignment on object recognition robustness [56]."
Dataset Splits: Yes. "For all models, training and validation images were downsampled to a resolution of 64×64 in consideration of computational constraints."
Hardware Specification: Yes. "Training was performed using single NVIDIA 3090 and A100 GPUs."
Software Dependencies: No. The paper mentions software such as the Python package lucent and "Free adversarial training" code, but does not specify version numbers for any key software components or libraries.
Experiment Setup: Yes. "Each model of this evaluation was randomly initialized and trained for 100 epochs with an initial learning rate of 0.1 (reduced by a factor of 10 at epochs 60 and 80, where validation set performance was typically observed to plateau) and a batch size of 128."
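The schedule described here (initial rate 0.1, divided by 10 at epochs 60 and 80) is a standard step decay. A minimal sketch of such a schedule, assuming these hyperparameters but not reproducing the authors' training code:

```python
def learning_rate(epoch, base_lr=0.1, milestones=(60, 80), gamma=0.1):
    """Step-decay schedule: multiply the learning rate by `gamma`
    at each milestone epoch. Defaults mirror the setup quoted above;
    this is an illustrative sketch, not the authors' implementation."""
    lr = base_lr
    for m in milestones:
        if epoch >= m:
            lr *= gamma
    return lr
```

In a PyTorch training loop the same effect is typically achieved with a multi-step scheduler stepped once per epoch.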