Non-reversible Gaussian processes for identifying latent dynamical structure in neural data

Authors: Virginia Rutten, Alberto Bernacchia, Maneesh Sahani, Guillaume Hennequin

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We apply GPFADS to synthetic data and show that it correctly recovers ground-truth phase portraits. GPFADS also provides a probabilistic generalization of jPCA, a method originally developed for identifying latent rotational dynamics in neural data. When applied to monkey M1 neural recordings, GPFADS discovers latent trajectories with strong dynamical structure in the form of rotations.
Researcher Affiliation | Collaboration | Virginia M. S. Rutten, Gatsby Computational Neuroscience Unit, University College London, London, UK & Janelia Research Campus, HHMI, Ashburn, VA, USA (ruttenv@janelia.hhmi.org); Alberto Bernacchia, MediaTek Research, Cambourne Business Park, Cambridge, UK (alberto.bernacchia@mtkresearch.com); Maneesh Sahani, Gatsby Computational Neuroscience Unit, University College London, London, UK (maneesh@gatsby.ucl.ac.uk); Guillaume Hennequin, Department of Engineering, University of Cambridge, Cambridge, UK (g.hennequin@eng.cam.ac.uk)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper neither states that the source code for its methodology is released nor links to a code repository.
Open Datasets | Yes | We applied GPFADS to M1 population recordings performed in monkey during reaching (Fig. 4; Churchland et al., 2012). Reference: Churchland, M. M., Cunningham, J. P., Kaufman, M. T., Foster, J. D., Nuyujukian, P., Ryu, S. I., and Shenoy, K. V. (2012). Neural population dynamics during reaching. Nature, 487(7405):51–56.
Dataset Splits | No | The paper mentions 'independent splits of the 108 conditions into train and test sets' (Figure 4 caption) and that it 'trained both GPFADS and GPFA with M = 4 latent dimensions on the same set of 50 trajectories' (Section 4.2), but it gives no percentages, absolute sample counts, or splitting methodology for training, validation, and test sets, so the splits cannot be reproduced.
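To make concrete what is missing, a reproducible protocol for splitting the 108 reaching conditions would need at least a seed and split proportions. A minimal sketch, with all specific values (seed, 50/50 proportion) hypothetical since the paper states none of them:

```python
import numpy as np

# Hypothetical reproducible split of the 108 reaching conditions.
# Seed and proportions are illustrative; the paper does not report them.
rng = np.random.default_rng(seed=0)
perm = rng.permutation(np.arange(108))
n_train = 54  # assumed 50/50 split, not stated in the paper
train, test = perm[:n_train], perm[n_train:]
print(len(train), len(test))  # 54 54
```

Reporting the seed, the proportion, and the number of independent repeats would be sufficient to reproduce the cross-validated results in Figure 4.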
Hardware Specification | No | The paper does not report the hardware used for its experiments (e.g., GPU/CPU models or memory).
Software Dependencies | No | The paper does not list its software dependencies or their version numbers.
Experiment Setup | Yes | For GPFADS, we used the kernel described in Eq. 15 with all fij(·) set to the squared-exponential kernel (with independent hyperparameters), and with the sum over (i, j) planes restricted to (1, 2) and (3, 4), i.e. two independent, orthogonal planes. For GPFA, we placed independent squared-exponential priors on each of the 4 latent dimensions (Yu et al., 2009). We note that the two models had the same number of parameters: GPFA had two more timescales than GPFADS, but the latter model had two learnable non-reversibility parameters α12 and α34. As C was not constrained to be orthogonal, we fixed ρ = 0, as any prior spatial correlations in a given plane could in this case be absorbed by a rotation of the corresponding two columns of C. Due to the smoothing of neural activity at the pre-processing stage (which we did not control), we found that fitting GPFA(DS) was prone to so-called Heywood cases, where some diagonal elements of R in Eq. 1 converge to very small values if allowed to (Heywood, 1931; Martin and McDonald, 1975). To circumvent this, here we constrained R ⪰ I, but note that this issue would likely not arise in the analysis of single-trial, spiking data.
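The kernel setup quoted above can be illustrated with a short numpy/scipy sketch (not the authors' code). One latent plane has squared-exponential marginals and an odd (antisymmetric-in-lag) cross-covariance scaled by a non-reversibility parameter α; following the paper's construction, the odd companion of the SE kernel is its Hilbert transform, which for a Gaussian is a Dawson function. Function names and the PSD check are ours:

```python
import numpy as np
from scipy.special import dawsn

def se_kernel(tau, ell=1.0):
    """Squared-exponential kernel k(tau) = exp(-tau^2 / (2 ell^2))."""
    return np.exp(-tau**2 / (2 * ell**2))

def se_hilbert(tau, ell=1.0):
    """Hilbert transform of the SE kernel, an odd function of the lag.
    For k(t) = exp(-t^2/2): H[k](x) = (2/sqrt(pi)) * dawsn(x/sqrt(2))."""
    return (2 / np.sqrt(np.pi)) * dawsn(tau / (np.sqrt(2) * ell))

def plane_gram(ts, alpha, ell=1.0):
    """Joint covariance of one latent plane (x1(t), x2(t)):
    SE diagonal blocks, cross block alpha * h(t - t') with h odd,
    so the plane is non-reversible whenever alpha != 0."""
    tau = ts[:, None] - ts[None, :]
    K = se_kernel(tau, ell)           # within-dimension covariance
    H = alpha * se_hilbert(tau, ell)  # cross-covariance, odd in tau
    return np.block([[K, H], [H.T, K]])

# The construction stays positive semidefinite for |alpha| <= 1.
ts = np.linspace(0, 10, 60)
G = plane_gram(ts, alpha=0.6)
print(np.linalg.eigvalsh(G).min() > -1e-8)
```

In the frequency domain the cross-spectrum is ±iα·sgn(ω) times the SE spectrum, so the spectral density matrix has eigenvalues proportional to 1 ± α, which is why |α| ≤ 1 preserves positive semidefiniteness and α = ±1 gives maximal non-reversibility.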