Removing Inter-Experimental Variability from Functional Data in Systems Neuroscience

Authors: Dominic Gonschorek, Larissa Höfling, Klaudia P. Szatko, Katrin Franke, Timm Schubert, Benjamin Dunn, Philipp Berens, David Klindt, Thomas Euler

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We compare our method to previous approaches on a large-scale dataset of two-photon imaging recordings of retinal bipolar cell responses to visual stimuli. This dataset provides a unique benchmark as it contains biological signal from well-defined cell types that is obscured by large inter-experimental variability. In a supervised setting, we compare the generalization performance of cell type classifiers across experiments, which we validate with anatomical cell type distributions from electron microscopy data. In an unsupervised setting, we remove inter-experimental variability from data which can then be fed into arbitrary downstream analyses.
Researcher Affiliation | Academia | University of Tübingen; Norwegian University of Science and Technology.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks clearly labeled as such.
Open Source Code | Yes | Code available at https://github.com/eulerlab/rave.
Open Datasets | Yes | To test our model approach, we use two datasets of two-photon imaging recordings [38, 40] of responses from the 14 mouse retinal bipolar cell (BC) types [41] to two visual stimuli, a local and a full-field chirp stimulus (Figure 3). ... In our study, we refer to these two datasets as A [2] and B [5] (for further preprocessing see Appendix).
Dataset Splits | Yes | We randomly split the data into training, validation and test set and train all models with empirical risk minimization. (See the split sketch below the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | All of our models are implemented and optimized in PyTorch [43]. (While PyTorch is mentioned, no specific version number is provided.)
Experiment Setup | Yes | Model weights are trained with stochastic gradient descent using one instance of the Adam optimizer [44] for the outer minimization of f and g... and then a second instance of Adam for the inner minimization of h... We optimize hyperparameters through random search [45] on the validation set... In the random search, we test different learning rates for both optimizers, and also different training schedules. We additionally search over depth, width and drop-out rate for each of the neural networks (f, g, h), as well as the trade-off parameter λ introduced in equation (3). (See the training sketch below the table.)
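
For the Dataset Splits row above, the paper states only that the data are split randomly into training, validation, and test sets. A minimal sketch of such a split follows; the 70/15/15 fractions, the NumPy-based indexing, and the fixed seed are illustrative assumptions, not the authors' code.

```python
# Hypothetical random train/validation/test split; the paper does not
# report split fractions, so 70/15/15 and the fixed seed are assumptions.
import numpy as np

def random_split(n_samples, frac_train=0.7, frac_val=0.15, seed=0):
    """Return disjoint index arrays for train, validation, and test sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(frac_train * n_samples)
    n_val = int(frac_val * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

train_idx, val_idx, test_idx = random_split(n_samples=1000)
```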
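
For the Experiment Setup row, the following is a rough sketch of the quoted two-optimizer scheme: one Adam instance takes outer steps on f and g, and a second Adam instance takes inner steps on h. The network shapes, the loss terms (reconstruction plus an adversarial experiment-classification term weighted by λ), the dummy data, and the per-step alternation are all assumptions for illustration; the paper's actual objective is its equation (3), and the released code at https://github.com/eulerlab/rave is authoritative.

```python
import torch

# Placeholder networks; the real architectures (depth, width, drop-out)
# are found by random search in the paper, so these shapes are illustrative.
f = torch.nn.Linear(64, 32)   # encoder f: responses -> latent code
g = torch.nn.Linear(32, 64)   # decoder g: latent code -> responses
h = torch.nn.Linear(32, 2)    # network h: latent code -> experiment label

opt_outer = torch.optim.Adam(list(f.parameters()) + list(g.parameters()), lr=1e-3)
opt_inner = torch.optim.Adam(h.parameters(), lr=1e-3)
lam = 1.0  # trade-off parameter lambda from equation (3); placeholder value

# Dummy data: 128 cells x 64 response features, recorded in 2 experiments.
x = torch.randn(128, 64)
domain = torch.randint(0, 2, (128,))

for step in range(100):
    # Inner minimization: train h to predict the experiment from the
    # latent code; detach so this step does not move the encoder f.
    z = f(x).detach()
    inner_loss = torch.nn.functional.cross_entropy(h(z), domain)
    opt_inner.zero_grad()
    inner_loss.backward()
    opt_inner.step()

    # Outer minimization: train f and g to reconstruct x while making
    # the latent code uninformative about the experiment (adversarial term).
    z = f(x)
    recon_loss = torch.nn.functional.mse_loss(g(z), x)
    adv_loss = torch.nn.functional.cross_entropy(h(z), domain)
    opt_outer.zero_grad()
    (recon_loss - lam * adv_loss).backward()
    opt_outer.step()
```

Detaching the latent code in the inner step keeps h's updates from moving the encoder, while the subtracted λ-weighted term in the outer step is what pushes the latent code to become uninformative about which experiment a cell came from.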