Online Variational Filtering and Parameter Learning

Authors: Andrew Campbell, Yuyang Shi, Thomas Rainforth, Arnaud Doucet

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate the performance of this methodology across several examples, including high-dimensional SSMs and sequential Variational Auto-Encoders.
Researcher Affiliation | Academia | Andrew Campbell, Yuyang Shi, Tom Rainforth, Arnaud Doucet; Department of Statistics, University of Oxford, UK; {campbell, yshi, rainforth, doucet}@stats.ox.ac.uk
Pseudocode | Yes | Algorithm 1: Online Variational Filtering and Parameter Learning.
Open Source Code | Yes | Code available at https://github.com/andrew-cr/online_var_fil
Open Datasets | Yes | We perform this experiment on a video sequence from a DeepMind Lab environment [5] (GNU GPL license).
Dataset Splits | No | The paper does not provide specific details on dataset splits (e.g., percentages or sample counts) for training, validation, or testing.
Hardware Specification | No | The paper discusses computational cost and high-dimensional models but does not specify hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies | No | The paper mentions using MLPs and KRR but does not specify software versions (e.g., Python, PyTorch, TensorFlow, scikit-learn) required to reproduce the experiments.
Experiment Setup | Yes | For $d_x = d_y = 10$, we first demonstrate accurate state inference by learning $\phi_t$ at each time step whilst holding $\theta$ fixed at the true value. We represent $\hat{T}_t(x_t)$ non-parametrically using KRR. Full details for all experiments are given in Appendix B.4. ... We reproduce the Chaotic Recurrent Neural Network (CRNN) example in [44], but with state dimension $d_x = 5$, $20$, and $100$. ... We let $q_t^{\phi_t}(x_{t-1} \mid x_t) = \mathcal{N}(x_{t-1}; \mathrm{MLP}_t^{\phi_t}(x_t), \mathrm{diag}(\sigma_t^2))$ and $q_t^{\phi_t}(x_t) = \mathcal{N}(x_t; \mu_t, \mathrm{diag}(\sigma_t^2))$, where we use a 1-layer Multi-Layer Perceptron (MLP) with 100 neurons for each $q_t^{\phi_t}(x_{t-1} \mid x_t)$. ... where $d_x = 32$, $\mathrm{NN}_\theta^f$ is a residual MLP and $\mathrm{NN}^g$ is a convolutional neural network. ... We use the same $q_t^{\phi_t}$ parameterization as for the CRNN but with a 2-hidden-layer MLP with 64 neurons.
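The quoted $q_t^{\phi_t}$ parameterization maps directly to code. Below is a minimal sketch assuming PyTorch (the paper does not name its framework), with a 1-hidden-layer, 100-neuron MLP as in the quoted CRNN setup; all class and variable names are illustrative rather than taken from the authors' repository.

```python
import torch
import torch.nn as nn

class BackwardKernel(nn.Module):
    """Sketch of q_t(x_{t-1} | x_t): a Gaussian whose mean is an MLP of x_t
    and whose covariance is a learned diagonal, diag(sigma_t^2)."""
    def __init__(self, dx: int, hidden: int = 100):
        super().__init__()
        self.mean_net = nn.Sequential(
            nn.Linear(dx, hidden), nn.ReLU(), nn.Linear(hidden, dx)
        )
        self.log_sigma = nn.Parameter(torch.zeros(dx))  # log std, one per dimension

    def forward(self, x_t: torch.Tensor) -> torch.distributions.Normal:
        return torch.distributions.Normal(self.mean_net(x_t), self.log_sigma.exp())

class FilterMarginal(nn.Module):
    """Sketch of q_t(x_t) = N(x_t; mu_t, diag(sigma_t^2)) with free mu_t, sigma_t."""
    def __init__(self, dx: int):
        super().__init__()
        self.mu = nn.Parameter(torch.zeros(dx))
        self.log_sigma = nn.Parameter(torch.zeros(dx))

    def forward(self) -> torch.distributions.Normal:
        return torch.distributions.Normal(self.mu, self.log_sigma.exp())
```

Sampling with `.rsample()` from either distribution keeps the reparameterization gradient that ELBO-style objectives need.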
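At a higher level, Algorithm 1 runs fully online: the setup quote describes learning $\phi_t$ at each time step, with $\theta$ either held fixed (the state-inference experiment) or updated alongside. The outline below sketches only that control flow under stated assumptions; `per_step_bound` is a placeholder for the paper's actual per-step objective (and its $\hat{T}_t$ recursion for parameter learning), which is not reproduced here, and the inner-loop count and learning rate are illustrative defaults.

```python
import torch

def run_online(observations, make_phi_t, per_step_bound, theta_params,
               n_inner: int = 10, lr: float = 1e-3, learn_theta: bool = True):
    """Illustrative control flow: fit phi_t at each step; optionally step theta."""
    theta_opt = torch.optim.Adam(theta_params, lr=lr) if learn_theta else None
    for y_t in observations:
        phi_t = make_phi_t()                   # fresh variational modules for this step
        phi_opt = torch.optim.Adam(phi_t.parameters(), lr=lr)
        for _ in range(n_inner):               # inner optimisation of phi_t
            phi_opt.zero_grad()
            per_step_bound(y_t, phi_t).backward()
            phi_opt.step()
        if learn_theta:                        # one model-parameter update per time step
            theta_opt.zero_grad()
            per_step_bound(y_t, phi_t).backward()
            theta_opt.step()
```

Passing `learn_theta=False` corresponds to the quoted experiment that holds $\theta$ fixed at the true value while learning $\phi_t$ only.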
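The setup also states that $\hat{T}_t(x_t)$ is represented non-parametrically with KRR. A hedged sketch using scikit-learn's KernelRidge follows; the paper does not specify its KRR implementation, kernel, or hyperparameters, so the RBF kernel and the values below are assumptions.

```python
import numpy as np
from sklearn.kernel_ridge import KernelRidge

def fit_T_hat(x_samples: np.ndarray, t_values: np.ndarray) -> KernelRidge:
    """Fit a non-parametric estimate of T_t from (state, target) pairs.

    x_samples: (n, dx) states drawn from the current filter approximation.
    t_values:  (n,) or (n, k) evaluations of the statistic being regressed.
    """
    krr = KernelRidge(kernel="rbf", alpha=1e-3, gamma=1.0)  # assumed hyperparameters
    krr.fit(x_samples, t_values)
    return krr

# Hypothetical usage: T_hat = fit_T_hat(x, T); evaluate via T_hat.predict(x_new)
```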