Learning Time-Invariant Representations for Individual Neurons from Population Dynamics

Authors: Lu Mi, Trung Le, Tianxing He, Eli Shlizerman, Uygar Sümbül

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate our method on a public multimodal dataset of mouse cortical neuronal activity and transcriptomic labels. We report >35% improvement in predicting the transcriptomic subclass identity and >20% improvement in predicting class identity with respect to the state-of-the-art.
Researcher Affiliation | Collaboration | Lu Mi (1,2), Trung Le (2), Tianxing He (2), Eli Shlizerman (2), Uygar Sümbül (1). 1: Allen Institute for Brain Science; 2: University of Washington. {lu.mi,uygars}@alleninstitute.org, {tle45,shlizee}@uw.edu, goosehe@cs.washington.edu
Pseudocode | Yes | Appendix F, Pseudocode: Our NeuPRINT framework includes three main components: an implicit dynamical system that uses a state-of-the-art transformer architecture to model neural dynamics; an optimization framework that fits the dynamical model and learns time-invariant representations for neurons; and a supervised learning framework that trains the downstream classifiers for subclass and class prediction, taking the learned time-invariant representations as inputs. The pseudocode for these three components is given in Appendix F of the paper.
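The three quoted components can be illustrated end to end. Below is a minimal pure-Python sketch, not the released NeuPRINT code: a linear model stands in for the paper's transformer, the window and embedding sizes are shrunk, the data are a toy trace, and all names are illustrative. It shows the key idea of jointly optimizing the dynamics model and a per-neuron time-invariant embedding.

```python
import random

random.seed(0)

WINDOW, EMB = 4, 2  # the paper uses a 200-step window and 64-dim embeddings

# Component 1 (stand-in): a linear dynamics model in place of the paper's
# transformer. It predicts the next activity value from recent history
# concatenated with the neuron's time-invariant embedding.
def predict(history, emb, w):
    x = history + emb
    return sum(wi * xi for wi, xi in zip(w, x))

# Component 2: jointly fit the model weights and the per-neuron embedding
# by gradient descent on the squared prediction error.
def train_step(history, target, emb, w, lr=0.01):
    x = history + emb
    err = predict(history, emb, w) - target
    new_emb = [e - lr * 2 * err * w[WINDOW + j] for j, e in enumerate(emb)]
    new_w = [wi - lr * 2 * err * xi for wi, xi in zip(w, x)]
    return new_emb, new_w, err ** 2

# Toy activity trace: an AR(1) rule with a neuron-specific offset that the
# learned embedding can absorb.
series = [0.0]
for _ in range(49):
    series.append(0.5 * series[-1] + 0.3)

emb = [random.uniform(-0.1, 0.1) for _ in range(EMB)]
w = [random.uniform(-0.1, 0.1) for _ in range(WINDOW + EMB)]

losses = []
for _ in range(200):
    total = 0.0
    for t in range(WINDOW, len(series)):
        emb, w, loss = train_step(series[t - WINDOW:t], series[t], emb, w)
        total += loss
    losses.append(total)

# Component 3 (downstream): the learned `emb` would be the input to a
# supervised subclass/class classifier; omitted here for brevity.
```

The prediction loss falls as both the dynamics weights and the embedding are updated, which is the mechanism by which the embedding becomes informative about the neuron's identity.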
Open Source Code | Yes | We released our software (https://github.com/lumimim/NeuPRINT/) for reproducibility.
Open Datasets | Yes | We use a recent, public multimodal dataset to train and demonstrate our model: Bugeon et al. [6] obtained population activity recordings from the mouse primary visual cortex (V1) via calcium imaging, followed by single-cell spatial transcriptomics of the tissue and registration of the two image sets to each other to identify the cells across the two experiments.
Dataset Splits | Yes | The neurons with subclass labels from all sessions are randomly split into train, validation, and test neurons with a proportion of 80% : 10% : 10%.
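For concreteness, the quoted 80/10/10 random split over labeled neurons can be reproduced with a few lines of standard Python; the function name and seed here are illustrative, not taken from the released code:

```python
import random

def split_neurons(neuron_ids, seed=0):
    # Shuffle the labeled neurons, then cut into 80% train, 10% validation,
    # and 10% test, matching the proportions reported in the paper.
    ids = list(neuron_ids)
    random.Random(seed).shuffle(ids)
    n = len(ids)
    n_train, n_val = int(0.8 * n), int(0.1 * n)
    return ids[:n_train], ids[n_train:n_train + n_val], ids[n_train + n_val:]

train, val, test = split_neurons(range(100))
```

Because the split is over neurons rather than time points, a held-out neuron's activity never appears in training, so the downstream classifier is evaluated on genuinely unseen cells.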
Hardware Specification | Yes | All optimizations are performed on one NVIDIA Tesla V100 GPU.
Software Dependencies | No | The paper mentions software like PyTorch, scikit-learn, Adam optimizer, and suite2p, but it does not provide specific version numbers for these software dependencies (e.g., 'PyTorch 1.9' or 'scikit-learn 0.24').
Experiment Setup | Yes | Training details: for the objective function used to predict activity, we explore both mean squared error (MSE) and negative log likelihood (NLL) under a Gaussian distribution. To train the dynamical model and the neuron representations, we use a 64-dimensional embedding for the time-invariant representation. The temporal trial window size is 200 steps for the linear and nonlinear models, the recurrent network, and the transformer. The batch size is 1024. We use the Adam optimizer [45] with a learning rate of 10^-3.
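The two objectives mentioned (MSE and Gaussian NLL) relate in a simple way: with a fixed unit variance, the Gaussian NLL reduces to half the MSE plus a constant, while a model-predicted variance additionally penalizes overconfidence. A small sketch with hypothetical function names:

```python
import math

def mse(pred, target):
    # Mean squared error for a single prediction.
    return (pred - target) ** 2

def gaussian_nll(mu, log_var, target):
    # Negative log likelihood of `target` under N(mu, exp(log_var)).
    # Predicting log-variance keeps the variance positive without clamping.
    var = math.exp(log_var)
    return 0.5 * (math.log(2 * math.pi * var) + (target - mu) ** 2 / var)
```

With `log_var = 0` (unit variance), `gaussian_nll(mu, 0.0, t)` equals `0.5 * mse(mu, t) + 0.5 * log(2 * pi)`, so the two objectives share the same minimizer in that special case.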