Outlier-robust Kalman Filtering through Generalised Bayes

Authors: Gerardo Duran-Martin, Matias Altamirano, Alex Shestopaloff, Leandro Sánchez-Betancourt, Jeremias Knoblauch, Matt Jones, François-Xavier Briol, Kevin Patrick Murphy

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show this empirically on a range of filtering problems with outlier measurements, such as object tracking, state estimation in high-dimensional chaotic systems, and online learning of neural networks. ... In this section, we study the performance of the WoLF methods in multiple filtering settings. Each experiment employs a dataset (or samples data from an SSM), a collection of benchmark methods, and a metric to compare the methods.
Researcher Affiliation | Collaboration | 1 School of Mathematical Sciences, Queen Mary University, London, UK; 2 Oxford-Man Institute of Quantitative Finance, University of Oxford, UK; 3 Department of Statistical Science, University College London, London, United Kingdom; 4 Department of Mathematics and Statistics, Memorial University of Newfoundland, St. John's, NL, Canada; 5 Mathematical Institute, University of Oxford, UK; 6 Institute for Cognitive Science, University of Colorado Boulder, US; 7 Google DeepMind.
Pseudocode | Yes | Algorithm 1 WoLF predict and update step... Algorithm 2 Agamennoni et al. (2012) predict and update step for i.i.d. noise with I ≥ 1 inner iterations. ... Algorithm 3 Wang et al. (2018) predict and update step with I ≥ 1 inner iterations. (A hedged sketch of a WoLF-style predict/update step appears after the table.)
Open Source Code | Yes | Our code can be found at https://github.com/gerdm/weighted-likelihood-filter.
Open Datasets | Yes | For our robust baselines, we make use of three methods that are representative of recent state-of-the-art approaches to robust filtering... Each experiment employs a dataset (or samples data from an SSM)... The dataset is available at https://github.com/yaringal/DropoutUncertaintyExps.
Dataset Splits | Yes | Each trial is carried out as follows: first, we randomly shuffle the rows in the dataset; second, we divide the dataset into a warmup dataset (10% of rows) and a corrupted dataset (remaining 90% of rows); third, we normalise the corrupted dataset using min-max normalisation from the warmup dataset. (See the data-preparation sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU, GPU models, or cloud instance types) used for running the experiments.
Software Dependencies | No | The paper mentions 'Jax (Bradbury et al., 2018)' but does not specify a version number for Jax or any other software dependencies used in the experiments.
Experiment Setup | Yes | The hyperparameters of each method are chosen on the first trial using the Bayesian optimisation (BO) package of Nogueira (2014). ... For the neural network fitting problem, we also consider a variant of online gradient descent (OGD) based on Adam (Kingma & Ba, 2017), which uses multiple inner iterations per step (measurement). ... In this experiment, the state dimension (number of parameters in the MLP) is m = (n_in × 20 + 20) + (20 × 1 + 1), where n_in is the dimension of the feature x_t. ... We set Q_t = 10^-4 I, which allows the parameters to slowly drift over time and provides some regularisation. ... In the EnKF, the predict step samples (1) to obtain θ_{t|t-1}. Then, the update step samples predictions ŷ_{t|t-1} ∈ R^d, for each particle, according to ŷ_{t|t-1} ~ N(h_t(θ_{t|t-1}), R_t). (See the parameter-count and EnKF sampling sketch after the table.)
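The Pseudocode row above quotes Algorithm 1, the WoLF predict and update step. Below is a minimal sketch of such a weighted-likelihood step for the linear-Gaussian case. This is not the authors' implementation: it assumes the per-observation weight w_t enters by replacing R_t with R_t / w_t^2, and the inverse multi-quadratic (IMQ) weight with soft threshold c is one illustrative choice of weight function.

```python
# Minimal sketch of a WoLF-style predict/update step (linear-Gaussian case).
# Assumption: the weight w_t down-weights the measurement by replacing R_t
# with R_t / w_t**2; the IMQ weight and threshold c are illustrative choices.
import numpy as np

def imq_weight(residual, c=1.0):
    # Inverse multi-quadratic weight: ~1 for small residuals, -> 0 for large outliers.
    return (1.0 + residual @ residual / c**2) ** -0.5

def wolf_step(mu, Sigma, y, F, Q, H, R, c=1.0):
    # Predict step: standard linear-Gaussian propagation.
    mu_pred = F @ mu
    Sigma_pred = F @ Sigma @ F.T + Q

    # Weight the observation by its predictive residual.
    resid = y - H @ mu_pred
    w = imq_weight(resid, c)

    # Update step: w < 1 inflates the effective measurement noise, so
    # outlying observations move the posterior mean and covariance less.
    R_eff = R / max(w**2, 1e-12)
    S = H @ Sigma_pred @ H.T + R_eff
    K = np.linalg.solve(S, H @ Sigma_pred).T   # Kalman gain: Sigma_pred H^T S^{-1}
    mu_new = mu_pred + K @ resid
    Sigma_new = Sigma_pred - K @ S @ K.T
    return mu_new, Sigma_new
```

When w_t = 1 for every observation, the step reduces to the standard Kalman filter update.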
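The per-trial protocol quoted in the Dataset Splits row (shuffle, 10%/90% warmup/corrupted split, min-max normalisation from warmup statistics) can be sketched as follows. The corruption step itself is omitted, and the function and variable names are illustrative rather than taken from the released code.

```python
# Illustrative per-trial preparation: shuffle, split 10%/90%, then min-max
# normalise the remaining rows using statistics computed on the warmup rows.
import numpy as np

def prepare_trial(data, rng, warmup_frac=0.10):
    # 1) Randomly shuffle the rows of the dataset.
    data = data[rng.permutation(len(data))]

    # 2) Split into a warmup set (10% of rows) and the remaining 90%.
    n_warmup = int(warmup_frac * len(data))
    warmup, rest = data[:n_warmup], data[n_warmup:]

    # 3) Min-max normalise the remaining rows with warmup statistics
    #    (guarding against zero range in any column).
    lo, hi = warmup.min(axis=0), warmup.max(axis=0)
    rest_norm = (rest - lo) / np.where(hi > lo, hi - lo, 1.0)
    return warmup, rest_norm
```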
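As a small sanity check on the Experiment Setup row, the sketch below computes the MLP state dimension m = (n_in × 20 + 20) + (20 × 1 + 1) and illustrates EnKF-style sampling of per-particle predictions ŷ_{t|t-1} ~ N(h_t(θ_{t|t-1}), R_t). The helper names (mlp_param_count, enkf_predict_obs, h) are hypothetical.

```python
# Parameter count for a one-hidden-layer MLP (n_in -> 20 -> 1) and an
# illustrative EnKF predictive-sampling step; names are hypothetical.
import numpy as np

def mlp_param_count(n_in, n_hidden=20, n_out=1):
    # Weights and biases per layer: (n_in*20 + 20) + (20*1 + 1).
    return (n_in * n_hidden + n_hidden) + (n_hidden * n_out + n_out)

def enkf_predict_obs(h, theta_particles, R, rng):
    # For each particle theta, sample a prediction y_hat ~ N(h(theta), R_t).
    means = np.stack([h(theta) for theta in theta_particles])
    noise = rng.multivariate_normal(np.zeros(R.shape[0]), R, size=len(theta_particles))
    return means + noise

# Example: an MLP with 4 input features has m = (4*20 + 20) + (20*1 + 1) = 121 parameters.
print(mlp_param_count(4))  # 121
```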