Generalizing to New Physical Systems via Context-Informed Dynamics Model

Authors: Matthieu Kirchmeyer, Yuan Yin, Jérémie Donà, Nicolas Baskiotis, Alain Rakotomamonjy, Patrick Gallinari

ICML 2022

Each entry below lists a reproducibility variable, its result, and the supporting LLM response quoted from the paper.
Research Type: Experimental. Evidence: "We evaluate two variations of CoDA on several ODEs/PDEs representative of a variety of application domains, e.g. chemistry, biology, physics. CoDA achieves SOTA generalization results on in-domain and one-shot adaptation scenarios. We also illustrate how, with minimal supervision, CoDA infers accurately new system parameters from learned contexts." and, from Section 5 (Experiments): "We validate our approach on four classes of challenging nonlinear temporal and spatiotemporal physical dynamics, representative of various fields e.g. chemistry, biology and fluid dynamics. We evaluate in-domain and adaptation prediction performance and compare them to related baselines."
Researcher Affiliation: Collaboration. Evidence: "1 CNRS-ISIR, Sorbonne University, Paris, France; 2 Criteo AI Lab, Paris, France; 3 Université de Rouen, LITIS, France."
Pseudocode: Yes. Evidence: "Algorithm 1: CoDA Pseudo-code". A minimal sketch of the adaptation mechanism the algorithm builds on is given below.
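For concreteness, here is a minimal sketch of the low-rank adaptation idea behind Algorithm 1, as we read it from the paper: each environment e carries a small context vector ξ_e, and environment-specific parameters are decoded linearly as θ_e = θ_c + W ξ_e, with θ_c and W shared across environments. Class and argument names (LinearContextHypernet, dim_context, num_envs) are ours, not the authors'.

```python
import torch
import torch.nn as nn

class LinearContextHypernet(nn.Module):
    """Sketch of CoDA-style adaptation: theta_e = theta_c + W @ xi_e,
    where theta_c and W are shared and xi_e is a per-environment context."""

    def __init__(self, num_params: int, dim_context: int, num_envs: int):
        super().__init__()
        self.theta_c = nn.Parameter(torch.randn(num_params) * 0.01)   # shared weights
        self.W = nn.Parameter(torch.zeros(num_params, dim_context))   # shared decoder
        self.xi = nn.Parameter(torch.zeros(num_envs, dim_context))    # per-env contexts

    def forward(self, env: int) -> torch.Tensor:
        # Environment-specific parameter vector for the dynamics model.
        return self.theta_c + self.W @ self.xi[env]
```

Under this scheme, adapting to a new environment only requires fitting the low-dimensional ξ_e, which is what makes one-shot adaptation cheap.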
Open Source Code: Yes. Evidence: "We provide our code at https://github.com/yuan-yin/CoDA."
Open Datasets: Yes. Evidence: "We generate trajectories on a temporal grid with ∆t = 0.5 and temporal horizon T = 10. We sample on each training environment N_tr = 4 initial conditions for training from a uniform distribution p(X0) = Unif([1, 3]²)." and "We sample on each training environment N_tr = 16 initial conditions for training from p(X0) as in Li et al. (2021)." A data-generation sketch follows below.
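The first quote appears to describe a Lotka-Volterra-style setup; below is a minimal, hedged sketch of such trajectory generation on the quoted grid. The parameter values passed to args are placeholders, not the paper's environment grid.

```python
import numpy as np
from scipy.integrate import solve_ivp

def lotka_volterra(t, x, alpha, beta, gamma, delta):
    # Classic predator-prey dynamics; environments in the paper differ
    # by their system parameters.
    u, v = x
    return [alpha * u - beta * u * v, delta * u * v - gamma * v]

rng = np.random.default_rng(0)
t_eval = np.linspace(0.0, 10.0, 21)         # grid with dt = 0.5, horizon T = 10
x0 = rng.uniform(1.0, 3.0, size=2)          # X0 ~ Unif([1, 3]^2)
sol = solve_ivp(lotka_volterra, (0.0, 10.0), x0,
                args=(0.5, 0.5, 0.5, 0.5),  # placeholder parameters
                t_eval=t_eval, rtol=1e-6, atol=1e-6)
trajectory = sol.y.T                        # shape (num_steps, 2)
```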
Dataset Splits: No. The paper defines training environments (E_tr), adaptation environments (E_ad), and test trajectories, but does not explicitly mention a separate validation split for hyperparameter tuning.
Hardware Specification: Yes. Evidence: "All experiments are performed with a single NVIDIA Titan Xp GPU on an internal cluster."
Software Dependencies: Yes. Evidence: "We back-propagate through the solver with torchdiffeq (Chen, 2021)." A sketch of differentiating through an ODE solve with torchdiffeq follows below.
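For reference, this is the standard torchdiffeq pattern for back-propagating through an ODE solve; the vector field below is a toy stand-in for the paper's dynamics model, not the authors' architecture.

```python
import torch
from torchdiffeq import odeint

class VectorField(torch.nn.Module):
    """Toy learnable dynamics dx/dt = f(x); stands in for the paper's model."""
    def __init__(self):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(2, 64), torch.nn.Tanh(), torch.nn.Linear(64, 2))

    def forward(self, t, x):
        return self.net(x)

f = VectorField()
x0 = torch.tensor([[1.5, 2.0]])              # batch of initial conditions
t = torch.linspace(0.0, 10.0, 21)            # same grid as above (dt = 0.5)
pred = odeint(f, x0, t)                      # differentiable solve, shape (21, 1, 2)
loss = pred.pow(2).mean()                    # placeholder loss
loss.backward()                              # gradients flow through the solver
```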
Experiment Setup: Yes. Evidence: "We use the Adam optimizer (Kingma & Ba, 2015) with learning rate 10⁻³ and (β1, β2) = (0.9, 0.999)." Reported CoDA hyperparameters per system:
LV: λξ = 10⁻⁴, λℓ1 = 10⁻⁶, λℓ2 = 10⁻⁵
GO: λξ = 10⁻³, λℓ1 = 10⁻⁷, λℓ2 = 10⁻⁷
GS: λξ = 10⁻², λℓ1 = 10⁻⁵, λℓ2 = 10⁻⁵
NS: λξ = 10⁻³, λℓ1 = 2·10⁻³, λℓ2 = 2·10⁻³
A sketch wiring these settings together is given below.
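A hedged sketch of how the quoted optimizer settings and the LV regularization weights might be wired together. The penalty below reflects our reading of the weights' roles (λξ on the context norm, λℓ1/λℓ2 on the decoded parameter offset δθ = W ξ_e); tensor sizes are placeholders, and the exact objective should be checked against the authors' code.

```python
import torch

lr, betas = 1e-3, (0.9, 0.999)                       # quoted Adam settings
lambda_xi, lambda_l1, lambda_l2 = 1e-4, 1e-6, 1e-5   # quoted LV weights

num_params, dim_context, num_envs = 1000, 2, 9       # placeholder sizes
theta_c = torch.zeros(num_params, requires_grad=True)
W = torch.zeros(num_params, dim_context, requires_grad=True)
xi = torch.zeros(num_envs, dim_context, requires_grad=True)

optimizer = torch.optim.Adam([theta_c, W, xi], lr=lr, betas=betas)

def penalty(env: int) -> torch.Tensor:
    # Regularize the context and the parameter offset delta = W @ xi_e
    # (our reading of the lambda_xi / lambda_l1 / lambda_l2 roles).
    delta = W @ xi[env]
    return (lambda_xi * xi[env].pow(2).sum()
            + lambda_l1 * delta.abs().sum()
            + lambda_l2 * delta.pow(2).sum())
```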