Evolve Smoothly, Fit Consistently: Learning Smooth Latent Dynamics For Advection-Dominated Systems
Authors: Zhong Yi Wan, Leonardo Zepeda-Núñez, Anudhyan Boral, Fei Sha
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show the efficacy of our framework by learning models that generate accurate multi-step rollout predictions at much faster inference speed compared to competitors, for several challenging examples. Experiments performed on challenging examples show that our proposed method achieves top performance in both accuracy and speed. |
| Researcher Affiliation | Industry | Zhong Yi Wan, Google Research, Mountain View, CA 94043, USA, wanzy@google.com; Leonardo Zepeda-Núñez, Google Research, Mountain View, CA 94043, USA, lzepedanunez@google.com; Anudhyan Boral, Google Research, Mountain View, CA 94043, USA, anudhyan@google.com; Fei Sha, Google Research, Mountain View, CA 94043, USA, fsha@google.com |
| Pseudocode | Yes | Algorithm 1 Approximating u(x, T) |
| Open Source Code | No | The paper does not provide an explicit link to source code for the described methodology or state that it is open-source/available. |
| Open Datasets | Yes | We used the spectral code in jax-cfd (Dresdner et al., 2022) to compute the datasets. |
| Dataset Splits | Yes | For each system, we use a training set of 1000 trajectories with at least 300 time steps each. For evaluation, trained models are then used to generate multi-step rollouts on 100 unseen initial conditions. |
| Hardware Specification | Yes | All training and inference runs are performed on single Nvidia V100 GPUs. |
| Software Dependencies | No | The method is implemented in JAX (Bradbury et al., 2018). We use the adaptive-step Dormand-Prince integrator (Dormand & Prince, 1980) implemented in scipy.integrate.solve_ivp. For all training stages, we use the Adam optimizer (Kingma & Ba, 2015)... However, specific version numbers for these libraries are not provided. (A sketch of the integrator call appears below the table.) |
| Experiment Setup | Yes | For all training stages, we use the Adam optimizer (Kingma & Ba, 2015) with β1 = 0.9, β2 = 0.999 and ϵ = 10⁻⁸. Table 4: Training specifications for encoder learning. Table 5: Training specifications for dynamics learning. We additionally employ gradient clipping (scaling norm to 0.25) to help stabilize training. (A sketch of this optimizer configuration appears below the table.) |
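The Software Dependencies row cites the adaptive-step Dormand-Prince integrator via `scipy.integrate.solve_ivp`. Below is a minimal, hedged sketch of such a call; the right-hand side `latent_dynamics`, the latent dimension, and the time window are hypothetical stand-ins for illustration, not the authors' code.

```python
# Sketch only: integrating a latent ODE with the adaptive-step
# Dormand-Prince method ("RK45") via scipy.integrate.solve_ivp.
import numpy as np
from scipy.integrate import solve_ivp

def latent_dynamics(t, z):
    # Placeholder right-hand side; in the paper this role is played by the
    # learned dynamics model mapping a latent state z to dz/dt.
    return -0.5 * z

z0 = np.random.randn(16)            # hypothetical latent initial condition
t_span = (0.0, 1.0)                 # hypothetical integration window
t_eval = np.linspace(*t_span, 11)   # time stamps at which to report the rollout

sol = solve_ivp(latent_dynamics, t_span, z0, method="RK45", t_eval=t_eval)
print(sol.y.shape)                  # (latent_dim, num_time_steps)
```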
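The Experiment Setup row reports Adam with β1 = 0.9, β2 = 0.999, ϵ = 10⁻⁸ plus gradient clipping that scales the norm to 0.25. A minimal sketch of how that configuration could be expressed with `optax` in JAX is shown below; the learning rate, model parameters, and loss are assumptions for illustration, not values taken from the paper.

```python
# Sketch only: reported Adam hyperparameters combined with global-norm
# gradient clipping at 0.25, expressed as an optax optimizer chain.
import jax
import jax.numpy as jnp
import optax

optimizer = optax.chain(
    optax.clip_by_global_norm(0.25),          # rescale gradient norm to 0.25
    optax.adam(learning_rate=1e-3,            # learning rate is assumed
               b1=0.9, b2=0.999, eps=1e-8),   # betas and eps as reported
)

params = {"w": jnp.zeros((4, 4))}             # hypothetical model parameters
opt_state = optimizer.init(params)

def loss_fn(p):
    return jnp.sum(p["w"] ** 2)               # placeholder loss

grads = jax.grad(loss_fn)(params)
updates, opt_state = optimizer.update(grads, opt_state, params)
params = optax.apply_updates(params, updates)
```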