Variational Gaussian Process State-Space Models

Authors: Roger Frigola, Yutian Chen, Carl Edward Rasmussen

NeurIPS 2014

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | 6 Experiments: The goal of this section is to showcase the ability of variational GP-SSMs to perform approximate Bayesian learning of nonlinear dynamical systems. In particular, we want to demonstrate: 1) the ability to learn the inherent nonlinear dynamics of a system, 2) the application in cases where the latent states have higher dimensionality than the observations, and 3) the use of non-Gaussian likelihoods. 6.1 1D Nonlinear System: We apply our variational learning procedure presented above to the one-dimensional nonlinear system described by p(x_{t+1} | x_t) = N(f(x_t), 1) and p(y_t | x_t) = N(x_t, 1), where the transition function is f(x_t) = x_t + 1 if x_t < 4 and f(x_t) = -4x_t + 21 if x_t ≥ 4. ... In Table 1, we compare our method (Variational GP-SSM) against the PMCMC sampling procedure from [8], taking 100 samples and 10 burn-in samples. (A simulation sketch of this 1D system appears after the table.)
Researcher Affiliation | Academia | Roger Frigola, Yutian Chen and Carl E. Rasmussen, Department of Engineering, University of Cambridge, {rf342,yc373,cer54}@cam.ac.uk
Pseudocode | Yes | Algorithm 1: Variational learning of GP-SSMs with particle smoothing. Batch mode (i.e. non-SVI) is the particular case where the mini-batch is the whole dataset. Require: observations y_{1:T}; initial values for θ, η1 and η2; schedules for ρ and λ; i = 1. repeat: y_{τ:τ'} ← SampleMinibatch(y_{1:T}); {x_{τ:τ'}^(l)}_{l=1..L} ← GetSamplesOptimalQx(y_{τ:τ'}, θ, η1, η2) [sample from eq. (12)]; ∇_θL ← GetThetaGradient({x_{τ:τ'}^(l)}_{l=1..L}, θ) [supp. material]; η1*, η2* ← GetOptimalQu({x_{τ:τ'}^(l)}_{l=1..L}, θ) [eq. (11) or (14)]; η1 ← η1 + ρ_i (η1* − η1); η2 ← η2 + ρ_i (η2* − η2); θ ← θ + λ_i ∇_θL; i ← i + 1; until ELBO convergence. (A Python sketch of this loop follows the table.)
Open Source Code | No | The paper does not provide any explicit statements about open-sourcing its code or links to a code repository.
Open Datasets | Yes | We use data in neuron cluster 1 (the most active) from experiment ec013.717 in [14]. [14] K. Mizuseki, A. Sirota, E. Pastalkova, K. Diba, and G. Buzsáki. Multiple single unit recordings from different rat hippocampal and entorhinal regions while the animals were performing multiple behavioral tasks. CRCNS.org. http://dx.doi.org/10.6080/K09G5JRZ, 2013.
Dataset Splits | No | The paper mentions training and test set sizes (e.g., T=500 for training, 105 data points for test), but it does not specify explicit training/validation/test dataset splits or validation procedures.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions various algorithms and models (e.g., Gaussian processes, sequential Monte Carlo) but does not provide specific version numbers for any software, libraries, or frameworks used in the experiments.
Experiment Setup | Yes | To solve the expectations with respect to the approximate smoothing distribution q(x) we use a bootstrap particle fixed-lag smoother with 1000 particles and a lag of 10. Smoothing is done with a fixed-lag particle smoother and training until convergence takes approximately 50 iterations of Algorithm 1. We use a Poisson likelihood relating the spike counts to the second latent state: y_t | x_t ~ Poisson(exp(α x_t^(2) + β)). (A sketch of the corresponding particle reweighting step follows the table.)
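
For reference, the 1D nonlinear benchmark quoted in the Research Type row is easy to simulate. The sketch below (my own illustration, not the authors' code) draws a trajectory from p(x_{t+1} | x_t) = N(f(x_t), 1) and p(y_t | x_t) = N(x_t, 1) with the piecewise-linear transition function as reconstructed above; T = 500 matches the training size mentioned in the Dataset Splits row, while the initial state and random seed are arbitrary assumptions.

```python
import numpy as np

def f(x):
    """Piecewise-linear transition function of the 1D benchmark, as quoted above."""
    return x + 1.0 if x < 4.0 else -4.0 * x + 21.0

def simulate(T=500, x0=0.5, seed=0):
    """Sample a latent trajectory x_{t+1} ~ N(f(x_t), 1) and observations y_t ~ N(x_t, 1)."""
    rng = np.random.default_rng(seed)
    x = np.empty(T)
    y = np.empty(T)
    x[0] = x0
    for t in range(T):
        y[t] = x[t] + rng.standard_normal()               # unit-variance observation noise
        if t + 1 < T:
            x[t + 1] = f(x[t]) + rng.standard_normal()    # unit-variance process noise
    return x, y

x_true, y_obs = simulate()
```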
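The pseudocode in Algorithm 1 is a stochastic variational inference loop: sample a mini-batch, draw samples from the optimal q(x), then take a natural-gradient step on the inducing-point distribution q(u) (natural parameters η1, η2) and a gradient step on the hyperparameters θ. The Python sketch below mirrors only that outer loop; every callable it receives (sample_minibatch, get_samples_optimal_qx, get_theta_gradient, get_optimal_qu, estimate_elbo) is a hypothetical placeholder for the corresponding subroutine in Algorithm 1, and the iteration cap and tolerance are assumptions.

```python
import numpy as np

def fit_variational_gpssm(y, theta, eta1, eta2, rho, lam,
                          sample_minibatch, get_samples_optimal_qx,
                          get_theta_gradient, get_optimal_qu,
                          estimate_elbo, max_iters=100, tol=1e-4):
    """Outer loop of Algorithm 1 (SVI form); the callables stand in for the
    paper's subroutines and must be supplied by the user."""
    elbo_prev = -np.inf
    for i in range(max_iters):
        y_mb = sample_minibatch(y)                                   # y_{tau:tau'}
        x_samples = get_samples_optimal_qx(y_mb, theta, eta1, eta2)  # sample from eq. (12)
        grad_theta = get_theta_gradient(x_samples, theta)            # supp. material
        eta1_star, eta2_star = get_optimal_qu(x_samples, theta)      # eq. (11) or (14)

        # Stochastic natural-gradient step on q(u); gradient ascent on theta.
        eta1 = eta1 + rho[i] * (eta1_star - eta1)
        eta2 = eta2 + rho[i] * (eta2_star - eta2)
        theta = theta + lam[i] * grad_theta

        elbo = estimate_elbo(x_samples, theta, eta1, eta2)
        if abs(elbo - elbo_prev) < tol:    # stand-in for "until ELBO convergence"
            break
        elbo_prev = elbo
    return theta, eta1, eta2
```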
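Finally, the Experiment Setup row combines a bootstrap fixed-lag particle smoother with a Poisson spike-count likelihood on the second latent dimension. The sketch below shows only the particle-reweighting step implied by y_t | x_t ~ Poisson(exp(α x_t^(2) + β)); the particle values, α, β and the observed count are placeholder numbers, and the function name is mine rather than the authors'.

```python
import numpy as np
from scipy.stats import poisson

def poisson_log_weights(y_t, particles, alpha, beta):
    """Log observation weights for spike count y_t under
    y_t | x_t ~ Poisson(exp(alpha * x_t^(2) + beta)), where x_t^(2)
    is the second latent dimension of each particle (column 1)."""
    rate = np.exp(alpha * particles[:, 1] + beta)
    return poisson.logpmf(y_t, rate)

# Reweight 1000 bootstrap particles, matching the particle count quoted above.
particles = np.random.default_rng(0).normal(size=(1000, 2))    # placeholder particles
log_w = poisson_log_weights(y_t=3, particles=particles, alpha=1.0, beta=0.0)
w = np.exp(log_w - log_w.max())
w /= w.sum()                                                   # normalized importance weights
```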