System Identification with Biophysical Constraints: A Circuit Model of the Inner Retina
Authors: Cornelius Schröder, David Klindt, Sarah Strauss, Katrin Franke, Matthias Bethge, Thomas Euler, Philipp Berens
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Fit to the responses of bipolar cells, the model generalized well to new stimuli including natural movie sequences, performing on par with or better than a benchmark black-box model. In pharmacology experiments, the model replicated in silico the effect of blocking specific amacrine cell populations with high fidelity, indicating that it had learned key circuit functions. Also, more in-depth comparisons showed that connectivity patterns learned by the model matched connectivity patterns extracted from connectomics data well. |
| Researcher Affiliation | Academia | Cornelius Schröder (University of Tübingen, cornelius.schroeder@uni-tuebingen.de); David Klindt (University of Tübingen, klindt.david@gmail.com); Sarah Strauss (University of Tübingen, sarah.strauss@uni-tuebingen.de); Katrin Franke (University of Tübingen, katrin.franke@cin.uni-tuebingen.de); Matthias Bethge (University of Tübingen, matthias@bethgelab.org); Thomas Euler (University of Tübingen, thomas.euler@cin.uni-tuebingen.de); Philipp Berens (University of Tübingen, philipp.berens@uni-tuebingen.de) |
| Pseudocode | No | No pseudocode or algorithm blocks are present in the paper. |
| Open Source Code | Yes | Code available at https://github.com/berenslab/bc_network. |
| Open Datasets | No | The paper uses data from recordings described in [5]: "As data, we used light-evoked responses of BCs recorded with the genetically encoded glutamate sensor iGluSnFr [5] (c.f. Appendix B)." However, it does not provide a direct link, DOI, or repository for public access to the specific processed dataset used in this paper. The cited paper [5] indicates data is available upon request. |
| Dataset Splits | No | The paper mentions training data and hold-out data for generalization, but does not specify exact percentages, sample counts, or predefined splits for training, validation, and test sets. It differentiates between 'training data' (chirp stimuli) and 'hold out data' (natural movies, sine flickering) but lacks explicit split information. |
| Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts, or computer specifications) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper states: "All models were written in PyTorch [28] and optimized using the Adam optimizer [29]". However, it does not provide specific version numbers for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | We minimized this loss function to train the LN and LSTM model. For the BCN model, we encouraged sparse connections between different types of neurons as observed in real EM data [14] by additionally adding a sparsity penalty, minimizing the L1-norm of all connectivity matrices W_j: L_sparsity = Σ_j \|\|W_j\|\|_1. Finally, we weighted the two terms and optimized L_total = L_correlation + β·L_sparsity. All models were written in PyTorch [28] and optimized using the Adam optimizer [29] (see Appendix C for details about hyper-parameter search and learning schedule). Appendix C states: "The models were trained using the Adam optimizer [29] with a learning rate of 10^-3 and a weight decay of 10^-5." and "The parameter β for the sparsity penalty was set to 10^-5. For the LSTM, we further used a gradient clipping of 10^-1. The batch size was set to 1 for the LSTM, otherwise to the length of the stimulus (5000 time steps)." |
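The combined objective described above (a correlation-based loss plus an L1 sparsity penalty on the connectivity matrices, weighted by β = 10^-5) can be sketched in plain Python. This is an illustrative reconstruction, not the authors' code: the function names (`correlation_loss`, `sparsity_penalty`, `total_loss`) are hypothetical, and the correlation term is assumed to be minimized as 1 minus the Pearson correlation between prediction and target.

```python
# Hedged sketch of the training objective:
#   L_total = L_correlation + beta * sum_j ||W_j||_1
# Function names are illustrative; the paper's actual implementation is in
# PyTorch at https://github.com/berenslab/bc_network.

def pearson_corr(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / ((vx * vy) ** 0.5)

def correlation_loss(pred, target):
    # Maximizing correlation is equivalent to minimizing (1 - correlation).
    return 1.0 - pearson_corr(pred, target)

def sparsity_penalty(weight_matrices):
    # L1-norm summed over all connectivity matrices W_j (lists of rows here).
    return sum(abs(w) for W in weight_matrices for row in W for w in row)

def total_loss(pred, target, weight_matrices, beta=1e-5):
    # Weighted combination, with beta = 1e-5 as reported in Appendix C.
    return correlation_loss(pred, target) + beta * sparsity_penalty(weight_matrices)
```

In the paper's PyTorch setting the same penalty would typically be added to the data loss inside the training loop (e.g. `loss = corr_loss + beta * sum(W.abs().sum() for W in connectivity_matrices)`) before calling `backward()`.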