Learning Stationary Time Series using Gaussian Processes with Nonparametric Kernels

Authors: Felipe Tobar, Thang D. Bui, Richard E. Turner

NeurIPS 2015 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "The proposed GPCM is validated using synthetic and real-world signals." and, from Section 4 (Experiments): "The DSE-GPCM was tested using synthetic data with known statistical properties and real-world signals."
Researcher Affiliation | Academia | Felipe Tobar (ftobar@dim.uchile.cl), Center for Mathematical Modeling, Universidad de Chile; Thang D. Bui (tdb40@cam.ac.uk), Department of Engineering, University of Cambridge; Richard E. Turner (ret26@cam.ac.uk), Department of Engineering, University of Cambridge
Pseudocode | No | No pseudocode or algorithm blocks explicitly labeled as such were found.
Open Source Code | No | No explicit statement or link providing access to source code for the methodology described was found.
Open Datasets | Yes | "We first analysed the Mauna Loa monthly CO2 concentration (de-trended)." and "The next experiment consisted of recovering the spectrum of an audio signal from the TIMIT corpus, composed of 1750 samples (at 16 kHz), only using an irregularly-sampled 20% of the available data."
Dataset Splits | No | "The experiment then consisted of (i) learning the underlying kernel, (ii) estimating the latent process and (iii) performing imputation by removing observations in the region [-4.4, 4.4] (10% of the observations)." and "All methods used all the data."
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) used for running experiments were mentioned.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) were mentioned.
Experiment Setup | Yes | "We chose 88 inducing points for ux, that is, 1/10 of the samples to be recovered and 30 for uh; the hyperparameters in eq. (2) were set to γ = 0.45 and α = 0.1, so as to allow for an uninformative prior on h(t). The variational objective F was optimised with respect to the hyperparameter σh and the variational parameters µh, µx (means) and the Cholesky factors of Ch, Cx (covariances) using conjugate gradients."
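
The Experiment Setup row above describes optimising a variational objective F over the variational means, the Cholesky factors of the variational covariances, and the hyperparameter σh using conjugate gradients. The sketch below is a hypothetical, minimal illustration of that kind of setup (not the authors' code): it packs the variational parameters into a single vector and optimises a placeholder quadratic objective with SciPy's conjugate-gradient routine. Only the inducing-point counts (88 for ux, 30 for uh) and the fixed values γ = 0.45, α = 0.1 are taken from the quote; the objective itself is a stand-in, since the actual GPCM free energy is not reproduced here.

```python
# Hypothetical sketch only: optimising a variational objective with conjugate
# gradients over (sigma_h, mu_h, mu_x, L_h, L_x), mirroring the quoted setup.
# The quadratic "free_energy" below is a placeholder, not the GPCM free energy.
import numpy as np
from scipy.optimize import minimize

M_X, M_H = 88, 30          # inducing-point counts quoted in the setup
GAMMA, ALPHA = 0.45, 0.1   # fixed hyperparameters of the prior on h(t), eq. (2)

def unpack(theta):
    """Split the flat parameter vector into sigma_h, means and Cholesky factors."""
    i = 0
    sigma_h = theta[i]; i += 1
    mu_h = theta[i:i + M_H]; i += M_H
    mu_x = theta[i:i + M_X]; i += M_X
    L_h = np.tril(theta[i:i + M_H * M_H].reshape(M_H, M_H)); i += M_H * M_H
    L_x = np.tril(theta[i:i + M_X * M_X].reshape(M_X, M_X))
    return sigma_h, mu_h, mu_x, L_h, L_x

def free_energy(theta):
    """Placeholder objective; the real F would involve the data, GAMMA and ALPHA."""
    sigma_h, mu_h, mu_x, L_h, L_x = unpack(theta)
    return ((sigma_h - 1.0) ** 2 + mu_h @ mu_h + mu_x @ mu_x
            + np.sum(L_h ** 2) + np.sum(L_x ** 2))

def free_energy_grad(theta):
    """Analytic gradient of the placeholder objective (zeros above the diagonal)."""
    sigma_h, mu_h, mu_x, L_h, L_x = unpack(theta)
    return np.concatenate([[2.0 * (sigma_h - 1.0)], 2.0 * mu_h, 2.0 * mu_x,
                           (2.0 * L_h).ravel(), (2.0 * L_x).ravel()])

n_params = 1 + M_H + M_X + M_H ** 2 + M_X ** 2
theta0 = 0.01 * np.random.default_rng(0).standard_normal(n_params)
result = minimize(free_energy, theta0, jac=free_energy_grad, method="CG")
print(result.fun, result.nit)
```

In the paper the objective also depends on the observed signal and on the prior hyperparameters; they appear here only to record the values quoted in the table.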