iLQR-VAE: control-based learning of input-driven dynamics with applications to neural data

Authors: Marine Schimel, Ta-Chu Kao, Kristopher T Jensen, Guillaume Hennequin

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the effectiveness of iLQR-VAE on a range of synthetic systems, with autonomous as well as input-driven dynamics. We further apply it to neural and behavioural recordings in non-human primates performing two different reaching tasks, and show that iLQR-VAE yields high-quality kinematic reconstructions from the neural data.
Researcher Affiliation | Academia | Marine Schimel (Department of Engineering, University of Cambridge, Cambridge, UK, mmcs3@cam.ac.uk); Ta-Chu Kao (Gatsby Computational Neuroscience Unit, University College London, London, UK, c.kao@ucl.ac.uk); Kristopher T. Jensen (Department of Engineering, University of Cambridge, Cambridge, UK, ktj21@cam.ac.uk); Guillaume Hennequin (Department of Engineering, University of Cambridge, Cambridge, UK, g.hennequin@eng.cam.ac.uk)
Pseudocode | Yes | Algorithm 1: iLQRsolve(Cθ(u), u_init)
Open Source Code | No | The paper mentions using a third-party LFADS implementation: 'We used the LFADS implementation from https://github.com/google-research/computation-thru-dynamics/tree/master/lfads_tutorial, which we modified to include linear dynamics and Gaussian likelihoods.' However, there is no explicit statement or link indicating that the authors' own code for iLQR-VAE is open-source or publicly available.
Open Datasets | Yes | To allow for direct comparison with benchmarks reported in Pei et al. (2021), we first used data provided by the Neural Latents Benchmark (NLB) challenge, available at https://gui.dandiarchive.org/#/dandiset/000128.
Dataset Splits | Yes | We used 1720 training trials and 510 validation trials, which were drawn randomly for each instantiation of the model to avoid overfitting to test data.
Hardware Specification | No | Averaging over data samples can be easily parallelized; we do this here using the MPI library and a local CPU cluster. This work was performed using resources provided by the Cambridge Tier-2 system operated by the University of Cambridge Research Computing Service (http://www.hpc.cam.ac.uk), funded by EPSRC Tier-2 capital grant EP/P020259/1. While CPUs are mentioned, no specific models or detailed specifications are provided for the cluster hardware.
Software Dependencies | No | We optimize the ELBO using Adam (Kingma and Ba, 2014) with a decaying learning rate 1/√i, where i is the iteration number. Averaging over data samples can be easily parallelized; we do this here using the MPI library and a local CPU cluster. Neither Adam nor MPI library versions are specified.
Experiment Setup | Yes | We optimized the ELBO using Adam (Kingma and Ba, 2014) with a decaying learning rate 1/√i, where i is the iteration number. We optimized the model parameters with Adam, using (manually optimized) learning rates of 0.04/(1+√(k/1)) for the free iLQR-VAE model, 0.04/(1+√(k/1)) for autonomous iLQR-VAE, and 0.02/(1+√(k/30)) for LFADS, where k is the iteration number. For this experiment, we fitted iLQR-VAE to the neural activity using a model with MGU dynamics (n = 60), a Student prior over inputs (m = 15), and a Poisson likelihood (no = 182 neurons).
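The random train/validation split described in the Dataset Splits row could be sketched as below. Only the counts (1720 training, 510 validation) come from the report; the total trial count, the seeding convention, and the function name `split_trials` are assumptions for illustration.

```python
import numpy as np

def split_trials(n_trials=1720 + 510, n_train=1720, rng=None):
    """Draw a fresh random train/validation split of trial indices.

    Hypothetical sketch: the report only states the split sizes and that
    trials were 'drawn randomly for each instantiation of the model'.
    """
    rng = np.random.default_rng(rng)
    perm = rng.permutation(n_trials)      # new random draw per instantiation
    return perm[:n_train], perm[n_train:]  # (train indices, validation indices)

train_idx, val_idx = split_trials(rng=0)
```

Re-drawing the split per model instantiation (rather than fixing one split) matches the report's stated rationale of avoiding overfitting to a particular held-out set.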
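The decaying learning-rate schedules quoted in the Experiment Setup row could be written as simple functions of the iteration number k. The functional forms below are reconstructed from the report's notation and are an assumption; the authors' actual code may differ.

```python
import math

def lr_ilqr_vae(k):
    """Assumed schedule for the free and autonomous iLQR-VAE models:
    0.04 / (1 + sqrt(k / 1)), with k the iteration number."""
    return 0.04 / (1.0 + math.sqrt(k / 1))

def lr_lfads(k):
    """Assumed schedule for the LFADS baseline:
    0.02 / (1 + sqrt(k / 30))."""
    return 0.02 / (1.0 + math.sqrt(k / 30))
```

For example, lr_ilqr_vae(0) gives the initial rate of 0.04, and the divisor inside the square root (1 vs. 30) controls how quickly each schedule decays.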