Learning to Assimilate in Chaotic Dynamical Systems

Authors: Michael McCabe, Jed Brown

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experimental results across several benchmark systems highlight the improved effectiveness of our approach over widely used data assimilation methods. In Section 5, we see that amortized assimilation methods match or outperform conventional approaches across several benchmark systems, with especially strong performance at smaller ensemble sizes.
Researcher Affiliation | Academia | Michael McCabe, Department of Computer Science, University of Colorado Boulder, michael.mccabe@colorado.edu; Jed Brown, Department of Computer Science, University of Colorado Boulder, jed@jedbrown.org
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | Code available at https://github.com/mikemccabe210/amortizedassimilation.
Open Datasets | No | For all experiments, we generate a training set of 6000 sequences with 40 assimilation steps each. The paper generates data from benchmark systems (Lorenz 96, Kuramoto-Sivashinsky, Vissio-Lucarini 20) rather than providing access to a pre-existing public dataset.
Dataset Splits | Yes | For all experiments, we generate a training set of 6000 sequences with 40 assimilation steps each. The validation set consists of a single sequence of an additional 1000 steps, and the test set is a further 10,000 steps. (See the data-generation sketch after the table.)
Hardware Specification | Yes | Models are trained on a single GTX 1070 GPU for 500 epochs.
Software Dependencies | No | Models are developed in PyTorch [55] using the torchdiffeq [56] library for ODE integration. We compare performance against a set of widely used filtering methods for data assimilation implemented in the Python DAPPER library [58]. The paper mentions software tools but does not provide specific version numbers for these dependencies.
Experiment Setup | Yes | AmEnF models are developed in PyTorch [55] using the torchdiffeq [56] library for ODE integration. Models are trained on a single GTX 1070 GPU for 500 epochs using the Adam [12] optimizer with initial learning rate 8e-4, with a warm-up over 50 iterations followed by halving the learning rate every 200 iterations. All experiments are repeated over ten independent noise samples, and error bars indicate a single standard deviation. (A configuration sketch of this schedule follows the table.)
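
The Open Datasets, Dataset Splits, and Software Dependencies rows describe data that is generated on the fly from chaotic benchmark systems and integrated with torchdiffeq rather than downloaded. Below is a minimal sketch of what such a generation step could look like for Lorenz 96. The sequence counts and lengths follow the paper, but the state dimension (40), forcing F = 8, step size, and observation-noise level are illustrative assumptions; the actual pipeline in the linked repository may differ.

```python
# Sketch only: Lorenz 96 trajectory generation with torchdiffeq, under the
# assumptions stated above (dim=40, F=8, dt=0.1, unit observation noise).
import torch
from torchdiffeq import odeint  # ODE integration library cited as [56]


def lorenz96(t, x, forcing=8.0):
    """Right-hand side of the Lorenz 96 system, vectorized over a batch of states."""
    return (torch.roll(x, -1, dims=-1) - torch.roll(x, 2, dims=-1)) \
        * torch.roll(x, 1, dims=-1) - x + forcing


def generate_sequences(n_sequences, n_steps, dim=40, dt=0.1, obs_noise=1.0):
    """Integrate n_sequences trajectories for n_steps and return truth plus noisy observations."""
    x0 = 8.0 + torch.randn(n_sequences, dim).double()   # perturbed rest state
    t = torch.arange(n_steps + 1, dtype=torch.float64) * dt
    truth = odeint(lorenz96, x0, t)                      # shape: (n_steps+1, n_sequences, dim)
    obs = truth + obs_noise * torch.randn_like(truth)    # simulated noisy observations
    return truth, obs


# Splits as reported: 6000 training sequences of 40 assimilation steps,
# a single 1000-step validation sequence, and a further 10,000-step test sequence.
train_truth, train_obs = generate_sequences(6000, 40)
val_truth, val_obs = generate_sequences(1, 1000)
test_truth, test_obs = generate_sequences(1, 10_000)
```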
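
The Experiment Setup row reports Adam with an initial learning rate of 8e-4, a 50-iteration warm-up, and halving of the learning rate every 200 iterations. One way to express that schedule in PyTorch is sketched below; the placeholder model, the linear shape of the warm-up, and the reading of "iterations" as optimizer steps are assumptions, not details confirmed by the paper.

```python
# Sketch only: Adam with lr 8e-4, linear 50-step warm-up, then halving every
# 200 steps. The model is a stand-in; the real AmEnF network lives in the repo.
import torch

model = torch.nn.Linear(40, 40)  # placeholder for the assimilation network
optimizer = torch.optim.Adam(model.parameters(), lr=8e-4)


def lr_lambda(step):
    """Linear warm-up for 50 steps, then halve the learning rate every 200 steps."""
    if step < 50:
        return (step + 1) / 50
    return 0.5 ** ((step - 50) // 200)


scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

# Inside the training loop, step the scheduler after each optimizer update:
#   loss.backward(); optimizer.step(); scheduler.step(); optimizer.zero_grad()
```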