Spatio-temporal Bayesian On-line Changepoint Detection with Model Selection
Authors: Jeremias Knoblauch, Theodoros Damoulas
ICML 2018
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate performance with code available from https://github.com/alan-turing-institute/bocpdms in two parts. First, we compare to benchmark performances of GP-based models on real-world data reported by Saatçi et al. (2010). This shows that, as implied by Thm. 1, VARs are excellent approximations for a large variety of data streams. Next, we showcase BOCPDMS' novelty in the multivariate setting. and Table 1. One-step-ahead predictive MSE and NLL of BOCPDMS compared to GP-based techniques, with 95% error bars. These metrics are sketched below the table. |
| Researcher Affiliation | Academia | ¹Department of Statistics, University of Warwick, UK; ²Department of Computer Science, University of Warwick, UK; ³The Alan Turing Institute for Data Science & AI, UK. |
| Pseudocode | Yes | BOCPD with Model Selection (BOCPDMS). Input at time 0: model universe M; hazard H; prior q. Input at time t: next observation y_t. Output at time t: ŷ_(t+1):(t+h_max), S_t, p(m_t\|y_1:t). A minimal sketch of this recursion follows the table. |
| Open Source Code | Yes | We evaluate performance with code available from https://github.com/alan-turing-institute/bocpdms in two parts. |
| Open Datasets | Yes | Monthly temperature averages 01/01/1880–01/01/2010 for the 21 longest-running stations across Europe are taken from http://www.ecad.eu/. |
| Dataset Splits | No | The paper refers to 'training period' and 'test set' in the context of hyperparameter optimization and comparison with other works, but does not provide specific details on dataset splits (e.g., percentages or sample counts) for training, validation, or testing. |
| Hardware Specification | No | The paper discusses computational performance and speed comparisons (e.g., 'BOCPDMS is > 60× faster than ARGPCP') but does not specify any particular hardware components like CPU models, GPU models, or memory used for its experiments. |
| Software Dependencies | No | The paper mentions using 'the software of Turner (2012)' but does not provide specific software dependencies with version numbers for its own implementation or experiments. |
| Experiment Setup | Yes | We use uniform model priors q, a constant hazard function H, and gradient descent for hyperparameter optimization as in Section 4. The lag lengths of models in M are chosen based on Thm. 1 (3) and the rates of Hannan & Kavalieris (1986) for BVARs and Bayesian Autoregressions (BARs), respectively. and In our experiments, we employ this strategy using T1 = 1, T2 = T. |
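
The Pseudocode row above only lists the algorithm's inputs and outputs. Below is a minimal, self-contained Python sketch of the underlying BOCPD-with-model-selection recursion: it tracks the joint posterior over run length and model, using a uniform model prior q and a constant hazard H = 1/λ as described in the Experiment Setup row. The `GaussianModel` class is a toy stand-in for the paper's BVAR/BAR model universe, and every name here is illustrative; this is not the API of the authors' bocpdms repository.

```python
import numpy as np
from scipy import stats

class GaussianModel:
    """Toy conjugate Gaussian model (known variance), tracked per run length.
    A stand-in for the paper's BVAR/BAR models; purely illustrative."""
    def __init__(self, mu0=0.0, kappa0=1.0, sigma2=1.0):
        self.mu0, self.kappa0, self.sigma2 = mu0, kappa0, sigma2
        self.mu = np.array([mu0])        # posterior means, one per run length
        self.kappa = np.array([kappa0])  # pseudo-counts, one per run length

    def pred_prob(self, y):
        # Posterior predictive N(mu, sigma2 * (1 + 1/kappa)) per run length.
        var = self.sigma2 * (1.0 + 1.0 / self.kappa)
        return stats.norm.pdf(y, loc=self.mu, scale=np.sqrt(var))

    def update(self, y):
        # Grow the run-length axis: fresh prior for r = 0, updated stats otherwise.
        mu_new = (self.kappa * self.mu + y) / (self.kappa + 1.0)
        self.mu = np.concatenate(([self.mu0], mu_new))
        self.kappa = np.concatenate(([self.kappa0], self.kappa + 1.0))

def bocpdms(data, models, q, lam=100.0):
    """Joint run-length/model recursion: p(r_t, m_t | y_1:t)."""
    h = 1.0 / lam                        # constant hazard
    T, M = len(data), len(models)
    joint = np.zeros((T + 1, M))
    joint[0, :] = q                      # r_0 = 0; model drawn from prior q
    model_post = np.zeros((T, M))
    for t, y in enumerate(data, start=1):
        new_joint = np.zeros_like(joint)
        cp_mass = 0.0
        for m, model in enumerate(models):
            pred = model.pred_prob(y)    # p(y_t | r, m) for r = 0..t-1
            new_joint[1:t + 1, m] = joint[:t, m] * pred * (1.0 - h)  # growth
            cp_mass += np.dot(joint[:t, m], pred) * h                # changepoint
            model.update(y)
        new_joint[0, :] = cp_mass * q    # new segment's model drawn from q
        joint = new_joint / new_joint.sum()
        model_post[t - 1] = joint.sum(axis=0)   # p(m_t | y_1:t)
    return joint, model_post

rng = np.random.default_rng(0)
data = np.concatenate([rng.normal(0, 1, 100), rng.normal(3, 1, 100)])
models = [GaussianModel(sigma2=1.0), GaussianModel(sigma2=4.0)]
joint, model_post = bocpdms(data, models, q=np.array([0.5, 0.5]))
print("MAP run length at T:", joint.sum(axis=1).argmax())  # ~100 after the break
```

For clarity this sketch updates every model at every step, so cost grows with t; the paper's implementation handles this with pruning of the run-length distribution.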
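The one-step-ahead MSE and NLL reported in Table 1 (quoted in the Research Type row) follow directly from the models' predictive distributions. A short hypothetical helper, assuming `pred_mean[t]` and `pred_logpdf[t]` summarize the predictive distribution of `y[t]` given `y[:t]`:

```python
import numpy as np

def one_step_metrics(y, pred_mean, pred_logpdf):
    """One-step-ahead predictive MSE and NLL (hypothetical helper):
    pred_mean[t] and pred_logpdf[t] summarize p(y[t] | y[:t])."""
    mse = np.mean((np.asarray(y) - np.asarray(pred_mean)) ** 2)
    nll = -np.mean(pred_logpdf)
    return mse, nll
```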