SigGPDE: Scaling Sparse Gaussian Processes on Sequential Data
Authors: Maud Lemercier, Cristopher Salvi, Thomas Cass, Edwin V. Bonilla, Theodoros Damoulas, Terry J Lyons
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We showcase the significant computational gains of SigGPDE compared to existing methods, while achieving state-of-the-art performance for classification tasks on large datasets of up to 1 million multivariate time series. |
| Researcher Affiliation | Academia | 1 University of Warwick and Alan Turing Institute; 2 University of Oxford and Alan Turing Institute; 3 Imperial College London and Alan Turing Institute; 4 CSIRO's Data61 and The University of Sydney. |
| Pseudocode | Yes | Algorithm 1: Backpropagation for kθ(X, X) via PDE (41) (see the forward-solve sketch below). |
| Open Source Code | No | The paper states 'All code is written in TensorFlow using GPflow', but does not provide a link or any statement that the code for SigGPDE itself is publicly available. |
| Open Datasets | Yes | We use a mixture of UEA & UCR time series datasets (timeseriesclassification.com) and real-world data for the final example. |
| Dataset Splits | Yes | For each dataset, all models are trained 3 times using a random training-validation split. The validation split is used to monitor the NLPP when optimizing the hyperparameters of the models (see the split sketch below). |
| Hardware Specification | No | No specific hardware details (like GPU/CPU models, memory, or cloud instance types) used for experiments are mentioned in the paper. |
| Software Dependencies | No | The paper states 'All code is written in TensorFlow using GPflow', but does not provide specific version numbers for these software dependencies. |
| Experiment Setup | Yes | For GPSig-IS, we use inducing sequences of length ℓ = 5, as recommended in Toth & Oberhauser (2020). We made use of M = 500 inducing features (see the SVGP sketch below). |
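
The Algorithm 1 referenced above computes gradients of the signature kernel by solving a second (adjoint) PDE rather than differentiating through the forward solver. For context, here is a minimal NumPy sketch of the forward solve only: the signature kernel of two paths is the solution U(s, t) of the Goursat PDE ∂²U/∂s∂t = ⟨ẋ(s), ẏ(t)⟩ U(s, t) with U(s, 0) = U(0, t) = 1, discretised with a first-order explicit finite-difference scheme. The Euclidean inner product below stands in for the parametrised static kernel kθ; function and variable names are illustrative, not taken from the paper's code.

```python
import numpy as np

def sig_kernel_pde(x, y):
    """Approximate the signature kernel k(x, y) of two discretised paths
    by an explicit finite-difference solve of the Goursat PDE
        d^2 U / ds dt = <dx/ds, dy/dt> * U,  U(s, 0) = U(0, t) = 1.
    x: (m, d) array of path samples; y: (n, d) array of path samples.
    """
    dx = np.diff(x, axis=0)        # (m-1, d) path increments
    dy = np.diff(y, axis=0)        # (n-1, d) path increments
    inc = dx @ dy.T                # (m-1, n-1) inner products <dx_i, dy_j>

    U = np.ones((len(x), len(y)))  # boundary conditions U(., 0) = U(0, .) = 1
    for i in range(len(x) - 1):
        for j in range(len(y) - 1):
            # first-order explicit update on the (s, t) grid
            U[i + 1, j + 1] = U[i + 1, j] + U[i, j + 1] + U[i, j] * (inc[i, j] - 1.0)
    return U[-1, -1]
```

Calling `sig_kernel_pde(X, X)` on a single sequence X gives the diagonal entry kθ(X, X) up to discretisation error; refining the grid reduces this error at a cost quadratic in sequence length.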
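
The split protocol in the table can be reproduced along the following lines. This is a sketch under assumptions: the paper does not state the split ratio (the 80/20 below is a guess), `train_model` is a hypothetical placeholder for fitting any of the compared models, and `predict_log_density` is GPflow's method for the predictive log density.

```python
from sklearn.model_selection import train_test_split

for seed in range(3):  # 3 independent runs, each on a fresh random split
    # 80/20 is an assumption; the paper only says "random training-validation split"
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.2, random_state=seed
    )
    model = train_model(X_train, y_train)  # hypothetical fitting routine
    # hyperparameters are tuned by monitoring the negative log predictive
    # probability (NLPP) on the held-out validation split
    nlpp = -model.predict_log_density((X_val, y_val)).numpy().mean()
```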
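
For the experiment setup, a structural sketch of a sparse variational GP with M = 500 inducing variables in GPflow is shown below. This is not the paper's model: SigGPDE's inducing variables are variational orthogonal signature features rather than inducing points in input space, and the squared-exponential kernel is only a stand-in for the signature kernel, which is not part of stock GPflow.

```python
import numpy as np
import gpflow

# N up to 1M series as in the paper; D is an illustrative feature dimension
N, D, M = 1_000_000, 30, 500

Z = np.random.randn(M, D)  # M = 500 inducing inputs (stand-ins for signature features)
model = gpflow.models.SVGP(
    kernel=gpflow.kernels.SquaredExponential(),  # placeholder for the signature kernel
    likelihood=gpflow.likelihoods.Bernoulli(),   # binary classification
    inducing_variable=Z,
    num_data=N,  # allows unbiased minibatch estimates of the ELBO
)
# training maximises a stochastic estimate of the ELBO with e.g. Adam:
#   loss = -model.elbo((X_batch, y_batch))
```

With `num_data` set, GPflow rescales the minibatch likelihood term so the ELBO estimate stays unbiased, which is what makes training on datasets of this size feasible.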