Variational Data Assimilation with a Learned Inverse Observation Operator
Authors: Thomas Frerix, Dmitrii Kochkov, Jamie Smith, Daniel Cremers, Michael Brenner, Stephan Hoyer
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experimental results for the Lorenz96 model and a two-dimensional turbulent fluid flow demonstrate that this procedure significantly improves forecast quality for chaotic systems. |
| Researcher Affiliation | Collaboration | 1Google Research 2Technical University of Munich 3Harvard University. Correspondence to: Thomas Frerix <thomas.frerix@tum.de>, Stephan Hoyer <shoyer@google.com>. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://github.com/googleinterns/invobs-data-assimilation |
| Open Datasets | No | We train on a dataset of 32000 independent observation trajectories with batch size 8 for 500 epochs. (No access information provided for this dataset.) |
| Dataset Splits | No | The paper mentions a training dataset and test trajectories ('on a set of 100 test trajectories') but does not specify a separate validation split or its size. |
| Hardware Specification | Yes | All models can be trained and optimized on a single NVIDIA V100 GPU. |
| Software Dependencies | No | The paper mentions using JAX, Flax, and the Adam optimizer but does not provide specific version numbers for these software components. |
| Experiment Setup | Yes | We implement our models for the approximate inverse in JAX (Bradbury et al., 2018) and use Flax as neural network library, with the Adam optimizer (Kingma & Ba, 2015) and learning rate of 10⁻³ for training... We train on a dataset of 32000 independent observation trajectories with batch size 8 for 500 epochs. We use L-BFGS (Nocedal & Wright, 2006) as an optimizer for assimilation, retaining a history of 10 vectors for the Hessian approximation. |
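The assimilation optimizer described above (L-BFGS with a 10-vector history for the Hessian approximation) can be sketched as follows. This is a minimal stand-in, not the paper's implementation: the paper optimizes in JAX, while here SciPy's `L-BFGS-B` with `maxcor=10` mirrors the same history setting, and `assimilation_loss` is a placeholder quadratic rather than a loss that rolls the dynamics forward.

```python
import numpy as np
from scipy.optimize import minimize

def assimilation_loss(x0, target):
    # Placeholder loss: squared distance of the candidate initial state to a
    # target summary. In the paper, this would involve the forward model
    # and the (learned inverse) observation operator.
    return np.sum((x0 - target) ** 2)

target = np.linspace(0.0, 1.0, 40)   # stand-in observation summary
x0_init = np.zeros_like(target)      # first guess for the initial state

result = minimize(
    assimilation_loss,
    x0_init,
    args=(target,),
    method="L-BFGS-B",
    options={"maxcor": 10},          # 10-vector L-BFGS history, as in the paper
)
print(result.success)
```

The `maxcor` option controls how many past gradient/step pairs L-BFGS retains, which corresponds to the "history of 10 vectors" quoted from the paper.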