Linear dynamical neural population models through nonlinear embeddings
Authors: Yuanjun Gao, Evan W. Archer, Liam Paninski, John P. Cunningham
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show that our techniques permit inference in a wide class of generative models. We also show in application to two neural datasets that, compared to state-of-the-art neural population models, fLDS captures a much larger proportion of neural variability with a small number of latent dimensions, providing superior predictive performance and interpretability. and Section 5 (Experiments) |
| Researcher Affiliation | Academia | Columbia University New York, NY, United States |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | A Python/Theano [26, 27] implementation of our algorithms is available at http://github.com/earcher/vilds. |
| Open Datasets | Yes | Macaque V1 with drifting grating stimulus with single orientation: The dataset consists of 148 neurons simultaneously recorded from the primary visual cortex (area V1) of an anesthetized macaque, as described in [20] (array 5). and Macaque center-out reaching data: We analyzed the neural population data recorded from the macaque motor cortex (G20040123), details of which can be found in [11, 1]. |
| Dataset Splits | Yes | For each orientation, we divide the data into 120 training trials and 30 testing trials. For PfLDS we further divide the 120 training trials into 110 trials for fitting and 10 trials for validation (we use the ELBO on validation set to determine when to stop training). (A split sketch appears below the table.) |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU models, CPU models, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Python/Theano' but does not specify version numbers for these software components. |
| Experiment Setup | Yes | When training a model using the AEVB algorithm, we run 500 epochs before stopping. and We analyze the spike activity from 300ms to 1200ms after stimulus onset. We discretize the data at Δt = 10ms, resulting in T = 90 timepoints per trial. (A binning sketch appears below the table.) |
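
The train/validation/test partition quoted in the Dataset Splits row (150 trials per orientation split into 110 fit + 10 validation + 30 test) is straightforward to reproduce. Below is a minimal sketch, assuming trials are assigned by a random permutation with a fixed seed; the paper does not say how trials were assigned, so both the permutation and the seed are assumptions.

```python
import numpy as np

def split_trials(n_trials=150, n_test=30, n_val=10, seed=0):
    """Partition trial indices into fit/validation/test sets.

    Mirrors the split quoted above: 150 trials per orientation ->
    120 training (110 fit + 10 validation) + 30 test. The random
    permutation and seed are assumptions, not from the paper.
    """
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_trials)
    test = idx[:n_test]
    val = idx[n_test:n_test + n_val]
    fit = idx[n_test + n_val:]
    return fit, val, test

fit, val, test = split_trials()
assert len(fit) == 110 and len(val) == 10 and len(test) == 30
```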
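
The Experiment Setup row also fixes the trial length: a 300–1200 ms window binned at Δt = 10 ms gives T = (1200 − 300) / 10 = 90 timepoints. Below is a minimal binning sketch, assuming spike times are stored in milliseconds relative to stimulus onset (the storage format is an assumption).

```python
import numpy as np

def bin_spikes(spike_times_ms, t_start=300.0, t_stop=1200.0, dt=10.0):
    """Bin one neuron's spike times into counts at resolution dt.

    Reproduces the quoted setup: the 300-1200 ms window after stimulus
    onset at dt = 10 ms yields T = (1200 - 300) / 10 = 90 timepoints.
    The input format (spike times in ms) is an assumption.
    """
    edges = np.arange(t_start, t_stop + dt, dt)  # 91 edges -> 90 bins
    counts, _ = np.histogram(spike_times_ms, bins=edges)
    return counts

counts = bin_spikes(np.array([305.0, 512.3, 512.9, 1199.0]))
assert counts.shape == (90,)
```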