Neural system identification for large populations separating “what” and “where”

Authors: David Klindt, Alexander S. Ecker, Thomas Euler, Matthias Bethge

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate this architecture on ground-truth data to explore the challenges and limitations of CNN-based system identification. Moreover, we show that our network model outperforms current state-of-the-art system identification models of mouse primary visual cortex. Our CNN with factorized readout outperformed all four baselines on all three scans (Table 1). (A sketch of the factorized readout follows the table.)
Researcher Affiliation | Academia | 1 Centre for Integrative Neuroscience, University of Tübingen, Germany; 2 Bernstein Center for Computational Neuroscience, University of Tübingen, Germany; 3 Institute for Ophthalmic Research, University of Tübingen, Germany; 4 Institute for Theoretical Physics, University of Tübingen, Germany; 5 Max Planck Institute for Biological Cybernetics, Tübingen, Germany; 6 Center for Neuroscience and Artificial Intelligence, Baylor College of Medicine, Houston, USA
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | Yes | The code to fit the models and reproduce the figures is available online at: https://github.com/david-klindt/NIPS2017
Open Datasets | Yes | Moreover, we show that our model outperforms the current state-of-the-art on a publicly available dataset of mouse V1 responses to natural images [19].
Dataset Splits | Yes | We use an initial learning rate of 0.001 and early stopping based on a separate validation set consisting of 20% of the training set. We fit all models using 80% of the training dataset for training and the remaining 20% for validation.
Hardware Specification | No | On this large dataset with 60.000 recordings from 8.000 neurons we were still able to fit the model on a single GPU and perform at 90% FEV (data not shown).
Software Dependencies | No | The paper mentions using the Adam optimizer and VGG-19 network, but does not provide specific version numbers for software dependencies such as programming languages or deep learning frameworks.
Experiment Setup | Yes | We use an initial learning rate of 0.001 and early stopping based on a separate validation set consisting of 20% of the training set. When the validation error has not improved for 300 consecutive steps, we go back to the best parameter set and decrease the learning rate once by a factor of ten. After the second time we end the training. We find the optimal regularization weights λm and λw via grid search. We trained the model using Adam with a batch-size of 64. (A sketch of this training schedule follows the table.)
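
For context on the model named in the Research Type row and in the paper's title, here is a minimal NumPy sketch of a factorized ("what"/"where") readout: each neuron's predicted response is a feature-weighted, spatially masked sum over the convolutional core's feature maps. Array names and dimensions are illustrative and not taken from the authors' repository.

```python
import numpy as np

# Toy dimensions (illustrative only, not the paper's settings).
K, H, W = 8, 16, 16   # feature channels and spatial size of the core output
N = 5                 # number of neurons

# Output of the convolutional core for one stimulus: K feature maps of size H x W.
features = np.random.rand(K, H, W)

# Factorized readout parameters per neuron:
#   "where": a spatial mask over the H x W grid (m_n)
#   "what":  one weight per feature channel (w_nk)
spatial_mask = np.random.rand(N, H, W)
feature_weights = np.random.rand(N, K)

# Predicted response of neuron n: r_n = sum over k and x of w_nk * m_n(x) * f_k(x)
responses = np.einsum('nk,nhw,khw->n', feature_weights, spatial_mask, features)
print(responses.shape)  # (5,)
```

The factorization separates the spatial location of a neuron's receptive field from its feature selectivity, which is what keeps the readout parameter count low for large populations.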
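The Dataset Splits and Experiment Setup rows together describe the optimization procedure: an 80/20 train/validation split, Adam with an initial learning rate of 0.001 and a batch size of 64, and early stopping that restores the best parameters and divides the learning rate by ten after 300 validation checks without improvement, ending training after the second decay. Below is a framework-agnostic Python sketch of that schedule; `train_step` and `validation_error` are hypothetical stand-ins for one optimizer update and one evaluation on the held-out 20%, not functions from the authors' repository.

```python
import numpy as np

def split_indices(n_samples, train_fraction=0.8, seed=0):
    """80/20 split of the training data into train and validation indices."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(train_fraction * n_samples)
    return idx[:n_train], idx[n_train:]

def fit_with_early_stopping(train_step, validation_error, params,
                            initial_lr=1e-3, patience=300, max_decays=2):
    """Training schedule as quoted above: when the validation error has not
    improved for `patience` consecutive checks, go back to the best parameter
    set and decrease the learning rate by a factor of ten; end training after
    the second decay. `train_step(params, lr)` performs one Adam update on a
    batch of 64 and returns updated parameters; `validation_error(params)`
    evaluates the held-out split. Both are hypothetical callables."""
    lr = initial_lr
    best_error = float('inf')
    best_params = params
    since_improvement = 0
    decays = 0
    while decays < max_decays:
        params = train_step(params, lr)
        error = validation_error(params)
        if error < best_error:
            best_error, best_params = error, params
            since_improvement = 0
        else:
            since_improvement += 1
        if since_improvement >= patience:
            params = best_params       # go back to the best parameter set
            lr /= 10.0                 # decrease the learning rate once by a factor of ten
            since_improvement = 0
            decays += 1
    return best_params
```

The regularization weights λm and λw mentioned in the Experiment Setup row would be chosen by wrapping a fit like this in an outer grid search; that loop is omitted here.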