Space and time continuous physics simulation from partial observations

Authors: Steeven Janny, Madiha Nadri, Julie Digne, Christian Wolf

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate on three standard datasets in fluid dynamics and compare to strong baselines, which are outperformed both in classical settings and in the extended new task requiring continuous predictions.
Researcher Affiliation | Collaboration | Steeven Janny, LIRIS, INSA Lyon, France (steeven.janny@insa-lyon.fr); Madiha Nadri, LAGEPP, Univ. Lyon 1, France (madiha.nadri-wolf@univ-lyon1.fr); Julie Digne, LIRIS, CNRS, France (julie.digne@cnrs.fr); Christian Wolf, Naver Labs Europe, France (christian.wolf@naverlabs.com)
Pseudocode | No | The paper describes the proposed algorithm in prose but does not provide structured pseudocode or an algorithm block.
Open Source Code | No | Code will be made public. Project page: https://continuous-pde.github.io/
Open Datasets | Yes | We evaluate on three standard datasets in fluid dynamics and compare to strong baselines, which are outperformed both in classical settings and in the extended new task requiring continuous predictions: Navier (Yin et al., 2022; Stokes, 2009), Shallow Water (Yin et al., 2022; Galewsky et al., 2004), and Eagle (Janny et al., 2023). The Navier and Shallow Water datasets are derived from the ones used in Yin et al. (2022).
Dataset Splits | Yes | The Navier dataset comprises 256 training simulations of 40 frames each, plus two further sets of 64 simulations for validation and testing. The Shallow Water dataset consists of 64 training simulations, with 16 simulations each for validation and testing. (A split sketch follows the table.)
Hardware Specification | No | This work was performed using HPC resources from GENCI-IDRIS (Grant 2023-AD010614014).
Software Dependencies | No | We used our own implementation of the model in PyTorch.
Experiment Setup | Yes | We used the AdamW optimizer with an initial learning rate of 10⁻³. Models were trained for 4,500 epochs, with a scheduled learning rate decay: the rate was multiplied by 0.5 after 2,500; 3,000; 3,500; and 4,000 epochs. Applying gradient clipping at a value of 1 effectively prevented catastrophic spiking during training. The batch size was set to 16. (A training-loop sketch follows the table.)
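For illustration only, here is a minimal Python sketch of how the reported simulation counts could be turned into train/validation/test index splits. The SPLITS mapping, the split_indices helper, and the contiguous ordering of simulations are assumptions made for this sketch; the paper only states the counts.

```python
# Hypothetical split of simulation indices into train/val/test, matching the
# reported counts (Navier: 256/64/64, Shallow Water: 64/16/16).
SPLITS = {
    "navier":        {"train": 256, "val": 64, "test": 64},
    "shallow_water": {"train": 64,  "val": 16, "test": 16},
}

def split_indices(name: str):
    """Return contiguous index ranges for the train/val/test simulations."""
    c = SPLITS[name]
    train = list(range(0, c["train"]))
    val = list(range(c["train"], c["train"] + c["val"]))
    test = list(range(c["train"] + c["val"], c["train"] + c["val"] + c["test"]))
    return train, val, test

train_idx, val_idx, test_idx = split_indices("navier")
print(len(train_idx), len(val_idx), len(test_idx))  # 256 64 64
```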
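The sketch below reproduces the reported training configuration in PyTorch (AdamW, initial learning rate 10⁻³, decay by 0.5 at epochs 2,500/3,000/3,500/4,000, gradient clipping at 1, batch size 16). The model, data, and loss are placeholders, and norm-based clipping is an assumption, since the paper does not specify whether clipping is by norm or by value.

```python
# Minimal PyTorch training-loop sketch matching the reported hyperparameters.
# Only the optimizer, schedule, clipping, and batch size follow the paper;
# the model, dataset, and loss below are placeholders.
import torch
from torch.utils.data import DataLoader, TensorDataset

model = torch.nn.Linear(16, 16)                                       # placeholder model
dataset = TensorDataset(torch.randn(256, 16), torch.randn(256, 16))   # placeholder data
loader = DataLoader(dataset, batch_size=16, shuffle=True)             # batch size 16

optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3)            # AdamW, lr = 10^-3
# Learning rate multiplied by 0.5 after epochs 2500, 3000, 3500, 4000
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[2500, 3000, 3500, 4000], gamma=0.5)

for epoch in range(4500):                                             # 4,500 epochs
    for x, y in loader:
        optimizer.zero_grad()
        loss = torch.nn.functional.mse_loss(model(x), y)              # placeholder loss
        loss.backward()
        # Gradient clipping at 1 (norm-based clipping assumed here)
        torch.nn.utils.clip_grad_norm_(model.parameters(), 1.0)
        optimizer.step()
    scheduler.step()
```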