Parsing neural dynamics with infinite recurrent switching linear dynamical systems
Authors: Victor Geadah, International Brain Laboratory, Jonathan W. Pillow
ICLR 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We first validate and demonstrate the capabilities of our model on synthetic data. Next, we turn to the analysis of mouse electrophysiological data during decision-making, and uncover strong non-stationary processes underlying both within-trial and trial-averaged neural activity. We train models by maximizing the Evidence Lower Bound (ELBO) using variational Laplace-EM from Zoltowski et al. (2020). We compare model performance using the marginal log likelihood (LL) $\log p(y_{1:T}) = \log \int p(y_{1:T} \mid x_{1:T}, z_{1:T})\, p(x_{1:T}, z_{1:T})\, dx_{1:T}\, dz_{1:T}$, which is the log-probability of held-out test data $y_{1:T}$ under a given model, where test data arises from a 4:1 train-to-test split of the full dataset (see details in Appendix B). The required integral is high-dimensional and intractable, and we thus resort to sequential Monte Carlo (SMC), also known as particle filtering, to compute it (Del Moral et al., 2006; Kantas et al., 2009). |
| Researcher Affiliation | Academia | Victor Geadah1, The International Brain Laboratory, Jonathan W. Pillow1,2 1Program in Applied and Computational Mathematics, Princeton University. 2Princeton Neuroscience Institute, Princeton University. {victor.geadah, pillow}@princeton.edu, info@internationalbrainlab.org |
| Pseudocode | Yes | Listing 1: Numba implementation of associative scan. Listing 2: Basic implementation of the transition matrix dynamics with heat-equation prior. |
| Open Source Code | Yes | The code for this work builds on the SSM package (Linderman et al., 2020), and we present in Appendix C.2 the relevant modules. Listing 1: Numba implementation of associative scan. Listing 2: Basic implementation of the transition matrix dynamics with heat-equation prior. |
| Open Datasets | Yes | We consider Neuropixels probe recordings from the Brainwide map data release (Laboratory, 2022). For our analyses, we projected spike train data onto the top principal components to obtain firing rates, making it amenable to analysis by state-space models with Gaussian emissions (Fig. 3B-C) (further methodological and data details can be found in Appendix B, including firing-rate and continuous latent dimensions). International Brain Laboratory. Data release Brainwide map Q4 2022, 11 2022. URL https://figshare.com/articles/preprint/Data_release_-_Brainwide_map_-_Q4_2022/21400815. |
| Dataset Splits | No | The paper mentions a 4:1 train-to-test split for the dataset but does not explicitly mention a separate validation split for reproduction. |
| Hardware Specification | Yes | All experiments were run on an external cluster. For reference, training a single model on a single session of the IBL ephys data can take up to 12 hours, with some multiprocessing. Training a single model on the synthetic NASCAR data can be accomplished on the order of 30 minutes. ... The experiments were run on an AMD EPYC 7H12 64-Core Processor, using 10 cores. No GPU was used. |
| Software Dependencies | No | The paper mentions software like 'SSM package', 'Numba', 'autograd', and 'Numpy' but does not provide specific version numbers for these dependencies. |
| Experiment Setup | Yes | For all experiments, we use K = 4, D = 2 ($x_t$ dimension) and M = 10 (observation $y_t$ dimension). Finally, we set our internal states $x_n$ to be of dimension D = 4 for per-trial neural activity, and D = 2 for trial-averaged activity. We picked K = 8 discrete states (see Appendix Tab. 3 for test marginal LL values for K ∈ {2, 4, 8}). For all experiments and results reported throughout the paper, we chose γ = 1.0 and κ = 0.4, we let ∆t = 1.0, and we set ∆x = √(8γ∆t) = 2√2 (for stability). All models were trained for 100 iterations, for every task. |
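
The Research Type row notes that the marginal log likelihood is estimated with sequential Monte Carlo because the integral over latents is intractable. The following is a minimal, hypothetical sketch of that idea (not the authors' code, which builds on the SSM package): a bootstrap particle filter estimating $\log p(y_{1:T})$ for a plain linear-Gaussian state-space model, omitting the discrete states $z_t$ of the full irSLDS for brevity. All function and variable names here are illustrative.

```python
import numpy as np

def bootstrap_pf_loglik(y, A, C, Q, R, n_particles=500, seed=0):
    """Estimate log p(y_{1:T}) for the model
       x_t = A x_{t-1} + N(0, Q),  y_t = C x_t + N(0, R)
    via a bootstrap particle filter (sketch only)."""
    rng = np.random.default_rng(seed)
    T, M = y.shape
    D = A.shape[0]
    Rinv = np.linalg.inv(R)
    log_norm = -0.5 * (M * np.log(2 * np.pi) + np.log(np.linalg.det(R)))
    # Initialize particles from a zero-mean Gaussian prior.
    x = rng.multivariate_normal(np.zeros(D), Q, size=n_particles)
    loglik = 0.0
    for t in range(T):
        # Propagate particles through the dynamics prior.
        x = x @ A.T + rng.multivariate_normal(np.zeros(D), Q, size=n_particles)
        # Log-weights from the Gaussian emission likelihood p(y_t | x_t).
        resid = y[t] - x @ C.T                       # (n_particles, M)
        logw = log_norm - 0.5 * np.einsum('ni,ij,nj->n', resid, Rinv, resid)
        # Accumulate the SMC estimate of log p(y_t | y_{1:t-1}).
        m = logw.max()
        loglik += m + np.log(np.mean(np.exp(logw - m)))
        # Multinomial resampling to avoid weight degeneracy.
        w = np.exp(logw - m)
        w /= w.sum()
        x = x[rng.choice(n_particles, size=n_particles, p=w)]
    return loglik
```

The per-step quantity $\log \hat p(y_t \mid y_{1:t-1})$ is the log of the average unnormalized weight, and summing these over $t$ gives an unbiased-in-expectation estimator of the marginal likelihood, which is how SMC makes the held-out test LL comparison in the table computable.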
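
The Experiment Setup row cites a heat-equation prior on the transition matrix dynamics (the paper's Listing 2) together with a grid spacing chosen "for stability." As a hedged illustration only (this is not the paper's listing, and the symbol names are assumptions), the value 2√2 is consistent with the standard explicit finite-difference stability bound γ∆t/∆x² ≤ 1/2, since √(8γ∆t) with γ = ∆t = 1 gives γ∆t/∆x² = 1/8:

```python
import numpy as np

def heat_step(u, gamma=1.0, dt=1.0, dx=2 * np.sqrt(2)):
    """One explicit finite-difference step of u_t = gamma * u_xx on a 1-D
    grid with reflecting (Neumann) boundaries. Illustrative sketch only."""
    # Explicit Euler is stable when gamma * dt / dx**2 <= 1/2; the values
    # quoted in the setup row give 1/8, safely inside that bound.
    assert gamma * dt / dx**2 <= 0.5, "explicit scheme would be unstable"
    lap = np.empty_like(u)
    lap[1:-1] = u[2:] - 2 * u[1:-1] + u[:-2]
    lap[0] = u[1] - u[0]
    lap[-1] = u[-2] - u[-1]
    return u + gamma * dt / dx**2 * lap
```

Each step diffuses mass toward neighboring grid points without changing the total, which is the smoothing behavior a heat-equation prior imposes on the transition parameters.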