Identification of Gaussian Process State Space Models

Authors: Stefanos Eleftheriadis, Tom Nicholson, Marc Deisenroth, James Hensman

NeurIPS 2017

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We benchmark the proposed GPSSM approach on data from one illustrative example and three challenging non-linear data sets of simulated and real data." |
| Researcher Affiliation | Collaboration | PROWLER.io; Imperial College London |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper neither states that its source code is released nor links to a code repository for the described method. |
| Open Datasets | No | The paper uses a synthetic dataset generated following Frigola et al. (2014), cart-pole and double-pendulum data from Deisenroth and Rasmussen (2011) and Deisenroth et al. (2015), and real data from a hydraulic actuator (Sjöberg et al., 1995). However, it provides no concrete access information (e.g., direct links, DOIs, or repository names) for these datasets. |
| Dataset Splits | No | The paper specifies train/test splits (e.g., "train the GPSSM on half the sequence (512 steps) and evaluate the model on the remaining half"; "Training of the GPSSM was performed with data up to 14 episodes, while always demonstrating the learnt underlying dynamics on the last episode, which serves as the test set"), but it does not mention a separate validation split. (A minimal split sketch follows the table.) |
| Hardware Specification | No | The paper does not report the hardware used for the experiments (e.g., CPU or GPU models, memory, or cloud instance types). |
| Software Dependencies | Yes | "we used the Adam optimizer (Kingma and Ba, 2015)"; "we use the implementations from GPflow (Matthews et al., 2017)". |
| Experiment Setup | Yes | "We used 20 inducing points (initialised uniformly across the range of the input data) for approximating the GP and 20 hidden units for the recurrent recognition model. The learning rate for the Adam optimiser was set to 10^-3." (Configuration sketches follow the table.) |
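
On the Dataset Splits row: a minimal sketch of the reported train/test halving, assuming the synthetic sequence is a 1,024-step NumPy array (the array contents and name are illustrative; the paper only reports the 512-step split):

```python
import numpy as np

# Placeholder for the 1,024-step observation sequence; the paper trains the
# GPSSM on the first half (512 steps) and evaluates on the remaining half.
y = np.random.randn(1024, 1)

split = len(y) // 2            # 512 steps
y_train, y_test = y[:split], y[split:]
```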
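
On the Software Dependencies and Experiment Setup rows: a minimal sketch of the reported configuration, assuming the current GPflow 2.x API (the 2017 paper used an earlier GPflow) and substituting a generic sparse GP regression model for the paper's GPSSM variational objective. The data, kernel choice, and iteration count are assumptions; the 20 uniformly initialised inducing points and the Adam learning rate of 10^-3 follow the quoted setup:

```python
import numpy as np
import tensorflow as tf
import gpflow

# Hypothetical one-step transition data (x_t -> x_{t+1}) standing in for the
# latent dynamics the paper models with a GP.
X = np.random.randn(512, 1)
Y = np.sin(X) + 0.1 * np.random.randn(512, 1)

# 20 inducing points, initialised uniformly across the range of the input data.
Z = np.linspace(X.min(), X.max(), 20).reshape(-1, 1)

model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),
    likelihood=gpflow.likelihoods.Gaussian(),
    inducing_variable=Z,
)

# Adam optimiser with the reported learning rate of 10^-3.
optimizer = tf.optimizers.Adam(learning_rate=1e-3)
loss = model.training_loss_closure((X, Y))
for _ in range(1000):  # iteration count is an assumption
    optimizer.minimize(loss, model.trainable_variables)
```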
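
The "recurrent recognition model" with 20 hidden units is described only at that level of detail; below is a hypothetical Keras stand-in, assuming a bidirectional LSTM that maps an observation sequence to per-step Gaussian parameters (mean and log-variance) over the latent states. The layer choice, latent dimensionality, and output parameterisation are assumptions, not the paper's architecture:

```python
import tensorflow as tf

latent_dim = 1  # assumed latent-state dimensionality

# Hypothetical recognition network: an observation sequence in, per-step
# Gaussian parameters over the latent states out.
recognition = tf.keras.Sequential([
    tf.keras.layers.Bidirectional(
        tf.keras.layers.LSTM(20, return_sequences=True)  # 20 hidden units, as reported
    ),
    tf.keras.layers.Dense(2 * latent_dim),  # mean and log-variance per step
])

# Example call: one sequence of 512 steps of 1-D observations.
params = recognition(tf.random.normal([1, 512, 1]))  # -> shape (1, 512, 2)
```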