Modeling Latent Neural Dynamics with Gaussian Process Switching Linear Dynamical Systems

Authors: Amber Hu, David Zoltowski, Aditya Nair, David Anderson, Lea Duncker, Scott Linderman

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply our method to synthetic data and data recorded in two neuroscience experiments and demonstrate favorable performance in comparison to the rSLDS.
Researcher Affiliation | Academia | Amber Hu (Stanford University, amberhu@stanford.edu); David Zoltowski (Stanford University, dzoltow@stanford.edu); Aditya Nair (Caltech & Howard Hughes Medical Institute, adi.nair@caltech.edu); David Anderson (Caltech & Howard Hughes Medical Institute, wuwei@caltech.edu); Lea Duncker (Columbia University, ld3149@columbia.edu); Scott Linderman (Stanford University, swl1@stanford.edu)
Pseudocode | No | The paper describes its algorithms using mathematical equations and descriptive text, but it does not include any clearly labeled 'Pseudocode' or 'Algorithm' blocks.
Open Source Code | Yes | Our implementation of the gpSLDS is available at: https://github.com/lindermanlab/gpslds.
Open Datasets | No | The paper mentions reanalyzing datasets from previous work (Nair et al. [27], Stine et al. [42]) but does not provide direct links, DOIs, or specific repository names for public access to these datasets. For the first real-data result, the NeurIPS checklist explicitly states, 'We do not provide data or code for the first real data result since that data has not been released to the public.'
Dataset Splits | No | For synthetic data, the paper describes the simulation procedure but no explicit splits. For real data, it states that it 'split the data into two trials' for organizational purposes, but it does not specify standard training, validation, and test splits (e.g., percentages or sample counts) for model development or evaluation.
Hardware Specification | Yes | We fit all of our models on an NVIDIA A100 GPU on an internal computing cluster.
Software Dependencies | No | The paper mentions using 'modern autodifferentiation capabilities in JAX' but does not provide specific version numbers for JAX or any other software dependencies crucial for reproducing the experiments.
Experiment Setup | Yes | Each run was fit with 50 total vEM iterations; each iteration consisted of 15 forward-backward solves to update q(x) and 300 Adam gradient steps with a learning rate of 0.01 to update kernel hyperparameters.
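
The schedule quoted in the last row can be read as a nested optimization loop: an outer vEM loop, an inner E-step of forward-backward solves that refine q(x), and an inner M-step of Adam updates on kernel hyperparameters. The sketch below is a hedged illustration of that reading only, not the authors' released implementation (see https://github.com/lindermanlab/gpslds for that); `forward_backward_update`, `neg_elbo`, and the `log_lengthscale` parameter are hypothetical placeholders.

```python
# Hedged sketch of the reported fitting schedule: 50 vEM iterations,
# each with 15 forward-backward solves and 300 Adam steps at lr 0.01.
import jax
import jax.numpy as jnp
import optax

N_VEM_ITERS = 50      # total variational EM (vEM) iterations
N_ESTEP_SOLVES = 15   # forward-backward solves per iteration to update q(x)
N_MSTEP_STEPS = 300   # Adam steps per iteration on kernel hyperparameters
LEARNING_RATE = 0.01

def forward_backward_update(posterior, params, data):
    # Placeholder: one forward-backward pass refining the posterior q(x).
    return posterior

def neg_elbo(params, posterior, data):
    # Placeholder: negative evidence lower bound as a function of the
    # kernel hyperparameters (here just a differentiable toy objective).
    return jnp.sum(params["log_lengthscale"] ** 2)

def fit(params, posterior, data):
    optimizer = optax.adam(LEARNING_RATE)
    opt_state = optimizer.init(params)
    for _ in range(N_VEM_ITERS):
        # E-step: repeatedly refine the variational posterior q(x).
        for _ in range(N_ESTEP_SOLVES):
            posterior = forward_backward_update(posterior, params, data)
        # M-step: Adam gradient steps on the kernel hyperparameters.
        for _ in range(N_MSTEP_STEPS):
            grads = jax.grad(neg_elbo)(params, posterior, data)
            updates, opt_state = optimizer.update(grads, opt_state, params)
            params = optax.apply_updates(params, updates)
    return params, posterior

# Toy usage with placeholder inputs.
params = {"log_lengthscale": jnp.zeros(2)}
params, posterior = fit(params, posterior=None, data=None)
```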