Targeted Neural Dynamical Modeling

Authors: Cole Hurwitz, Akash Srivastava, Kai Xu, Justin Jude, Matthew Perich, Lee Miller, Matthias Hennig

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We implement TNDM as a sequential variational autoencoder and validate it on simulated recordings and recordings taken from the premotor and motor cortex of a monkey performing a center-out reaching task.
Researcher Affiliation | Collaboration | Cole Hurwitz, School of Informatics, University of Edinburgh, Edinburgh, Scotland, EH8 9AB, colehurwitz@gmail.com; Akash Srivastava, MIT-IBM Watson AI Lab, Cambridge, MA 02142, Akash.Srivastava@ibm.com
Pseudocode | No | The paper describes the generative model and inference process using equations and diagrams, but it does not include a clearly labeled pseudocode or algorithm block.
Open Source Code | Yes | The code for running and evaluating TNDM on real data can be found at https://github.com/HennigLab/tndm_paper. We also provide a Tensorflow2 re-implementation of TNDM at https://github.com/HennigLab/tndm.
Open Datasets | Yes | We apply TNDM to data gathered from a previously published monkey reaching experiment [7]. (Citation [7]: Gallego JA, Perich MG, Chowdhury RH, Solla SA, Miller LE. Long-term stability of cortical population dynamics underlying consistent behavior. Nature Neuroscience, 2020.)
Dataset Splits | No | The paper states 'Out of the 176 trials from the experiment, we use 80% for training (136 trials). We hold out the remaining 34 trials to test the models.' While a test set is explicitly defined, no separate validation set is mentioned for hyperparameter tuning or model selection in the main text.
Hardware Specification | No | The paper does not provide specific details about the hardware used for the experiments, such as GPU/CPU models, memory, or specific cloud instances.
Software Dependencies | Yes | To implement TNDM, we primarily adapt the original Tensorflow [1] implementation... We also provide a Tensorflow2 re-implementation of TNDM...
Experiment Setup | Yes | For all models, we perform a sweep over the number of latent factors. For TNDM and PSID, we train models with all combinations of 1-5 relevant latent factors and 1-5 irrelevant factors... For LFADS, we train models with the number of latent factors ranging from 2-10. As TNDM and LFADS are both implemented as sequential variational autoencoders, we fix the architectures to be the same for the two methods (64 units in the generators and encoder). We fix all shared hyperparameters to be the same between the two methods except for the dropout...
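
The latent-factor sweep quoted in the Experiment Setup row can be summarised with a short sketch. This is a minimal illustration under stated assumptions, not code from the TNDM repositories: the configuration dictionaries and the sweep_configurations helper are hypothetical, and only the ranges (1-5 relevant and 1-5 irrelevant factors for TNDM and PSID, 2-10 factors for LFADS, 64 units in the shared sequential-VAE generator and encoder) come from the quoted text.

from itertools import product

# Hypothetical sketch of the sweep described above; names are illustrative,
# not taken from the TNDM, PSID, or LFADS code bases.
SHARED_VAE_ARCH = {"generator_units": 64, "encoder_units": 64}  # fixed for TNDM and LFADS

def sweep_configurations():
    configs = []
    # TNDM and PSID: every combination of 1-5 relevant and 1-5 irrelevant latent factors.
    for model in ("TNDM", "PSID"):
        for relevant, irrelevant in product(range(1, 6), range(1, 6)):
            cfg = {"model": model,
                   "relevant_factors": relevant,
                   "irrelevant_factors": irrelevant}
            if model == "TNDM":  # sequential-VAE architecture shared with LFADS
                cfg.update(SHARED_VAE_ARCH)
            configs.append(cfg)
    # LFADS: a single pool of 2-10 latent factors.
    for n_factors in range(2, 11):
        configs.append({"model": "LFADS", "latent_factors": n_factors, **SHARED_VAE_ARCH})
    return configs

if __name__ == "__main__":
    print(len(sweep_configurations()))  # 2 * 25 + 9 = 59 configurations

Each configuration would then be trained and evaluated as described in the paper; the sketch only enumerates the grid.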