Constrained Physical-Statistics Models for Dynamical System Identification and Prediction

Authors: Jérémie Donà, Marie Déchelle, Patrick Gallinari, Marina Levy

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | For more complex dynamics, we validate our framework experimentally. (Section 4, Experiments) We validate Algorithm 1 on datasets of increasing difficulty (see appendix E), where the system state is either fully or partially observed (resp. section 4.1 and section 4.2). We no longer rely on an affine prior and explicit h_k and h_u for each dataset. Performances are evaluated via standard metrics: MSE (lower is better) and relative Mean Absolute Error (rMAE, lower is better). We assess the relevance of our proposition based on eqs. (4) and (5), against Neural ODE (Chen et al., 2018), Aphynity (Yin et al., 2021) and ablation studies. We denote Ours eq. (4) (resp. Ours eq. (5)) the results when ℓ = d(h_k, f), i.e. eq. (4) (resp. ℓ = d(h_k, f_k^pr), i.e. eq. (5)). When d(h_k, f) (resp. d(h_u, 0)) is not considered in the optimization, we refer to the results as d(h, f) + d(h_u, 0) (resp. d(h, f) + d(h_k, f)). When h is trained by only minimizing the discrepancy between actual and predicted trajectories, the results are denoted Only d(h, f). We report between brackets the standard deviation of the metrics over 5 runs and refer to Appendices F and G for training information and additional results. (A sketch of the MSE and rMAE metrics appears after the table.)
Researcher Affiliation | Collaboration | Jérémie Donà (1), Marie Déchelle (1), Marina Levy (2), Patrick Gallinari (1,3). Affiliations: (1) Sorbonne Université, CNRS, ISIR, F-75005 Paris, France; (2) Sorbonne Université, CNRS, LOCEAN-IPSL, F-75005 Paris, France; (3) Criteo AI Labs, Paris, France.
Pseudocode | Yes | Algorithm 1 (Alternate estimation, general setting). Result: converged h_k and h_u. Set h_u^0 = 0, h_k^0 = arg min_{h_k ∈ H_k} d(h_k, f), tol ∈ R+. While d(h, f) > tol do: h_k^{n+1} = arg min_{h_k ∈ S_k} d(h_k + h_u^n, f); h_u^{n+1} = arg min_{h_u ∈ S_u} d(h_k^{n+1} + h_u, f) (eq. 8); n ← n + 1; end. (An illustrative PyTorch reading of this loop appears after the table.)
Open Source Code | No | The paper does not contain any explicit statement about releasing source code or a link to a code repository for the methodology described.
Open Datasets | Yes | Damped Pendulum (DPL): Now a standard benchmark for hybrid models, we consider the motion of a pendulum of length L damped due to viscous friction (Greydanus et al., 2019; Yin et al., 2021). ... Real Ocean Dynamics (Natl): We consider a dataset emulating real world observations of the North ATLantic ocean (denoted Natl) (Ajayi et al., 2019). Details available at: https://meom-group.github.io/swot-natl60/access-data.html
Dataset Splits | Yes | For both DPL and LV experiments, we consider the following setting: we sample the space of initial conditions, building 100/50/50 trajectories for the train, validation and test sets. ... Dataset Generation: using the computed U and S, we integrate eq. (36) with δt = 8640 s over 30 days, using a Semi-Lagrangian scheme (see explanations below). We generate 800/100/200 sequences respectively for train, validation and test, sampling over the initial conditions... ... We sample 200/20/50 sequences of 1 year, for respectively train, validation and test. (A generic split-generation sketch appears after the table.)
Hardware Specification | Yes | All experiments were conducted on NVIDIA TITAN X GPU using Pytorch (Paszke et al., 2019).
Software Dependencies | No | The paper mentions 'Pytorch (Paszke et al., 2019)' but does not specify a version number for PyTorch or for any other software dependency, which is necessary for reproducibility.
Experiment Setup | Yes | F.1 DAMPED PENDULUM: 'For this dataset we use RMSProp optimizer with learning rate 0.0004 for 100 epochs with batch size 128.' F.2 LOTKA-VOLTERRA: 'We use Adam optimizer with learning rate 0.0005 for 200 epochs with batch size 128.' F.3 GEOPHYSICAL DATASETS: 'We use Adam optimizer with learning rate 0.0001 for 30 epochs with batch size 32.' The paper also details initialization and adjustment strategies for the hyperparameters λ_h, λ_hk and λ_hu for each dataset. (A configuration sketch collecting these settings appears below the table.)
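
The metrics quoted in the Research Type row (MSE and rMAE) are standard trajectory-level errors. Below is a minimal PyTorch sketch of one common way to compute them; the rMAE normalization used here (MAE divided by the mean absolute value of the target) is an assumption, since the report does not quote the paper's exact definition.

```python
import torch

def mse(pred, target):
    # Mean squared error over all trajectory points (lower is better).
    return torch.mean((pred - target) ** 2)

def rmae(pred, target, eps=1e-8):
    # Relative mean absolute error: MAE normalized by the mean absolute value
    # of the target (lower is better). The paper's exact normalization may differ.
    return torch.mean(torch.abs(pred - target)) / (torch.mean(torch.abs(target)) + eps)
```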
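The alternating scheme in the Pseudocode row can be read as two optimization sub-problems solved in turn. The sketch below is an illustrative reading of Algorithm 1, not the authors' implementation: `d` is assumed to be a user-supplied callable returning the trajectory discrepancy for the current h = h_k + h_u, and h_u is assumed to be initialized so that it outputs (near) zero, mimicking h_u^0 = 0.

```python
import torch

def alternate_estimation(h_k, h_u, loader, d, n_outer=10, inner_steps=50,
                         tol=1e-4, lr=1e-3):
    """Illustrative sketch of Algorithm 1 (alternate estimation).

    h_k, h_u : torch.nn.Module components of the hybrid model h = h_k + h_u.
    d        : callable d(h_k, h_u, batch) -> scalar discrepancy between the
               trajectory predicted with h_k + h_u and the observed one (f).
    """
    loss = torch.tensor(float("inf"))
    for _ in range(n_outer):  # outer loop: while d(h, f) > tol
        # h_k^{n+1} = arg min_{h_k} d(h_k + h_u^n, f)   (h_u kept fixed)
        opt_k = torch.optim.Adam(h_k.parameters(), lr=lr)
        for _ in range(inner_steps):
            for batch in loader:
                opt_k.zero_grad()
                loss = d(h_k, h_u, batch)
                loss.backward()
                opt_k.step()
        # h_u^{n+1} = arg min_{h_u} d(h_k^{n+1} + h_u, f)   (h_k kept fixed)
        opt_u = torch.optim.Adam(h_u.parameters(), lr=lr)
        for _ in range(inner_steps):
            for batch in loader:
                opt_u.zero_grad()
                loss = d(h_k, h_u, batch)
                loss.backward()
                opt_u.step()
        if loss.item() <= tol:
            break
    return h_k, h_u
```

Freezing one component is implicit here: only the optimizer of the component being updated takes steps, which mirrors the two arg min sub-problems of eq. (8).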
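The Dataset Splits row builds each split by drawing independent initial conditions and integrating one trajectory per draw. A generic sketch follows, where `sample_initial_condition` and `integrate` are hypothetical stand-ins for the dataset-specific generators described in the paper's appendix; the default sizes match the 100/50/50 setting reported for DPL and LV.

```python
import numpy as np

def build_splits(sample_initial_condition, integrate,
                 n_train=100, n_val=50, n_test=50, seed=0):
    # Draw independent initial conditions and integrate one trajectory each;
    # the geophysical datasets use 800/100/200 and 200/20/50 splits instead.
    rng = np.random.default_rng(seed)
    n_total = n_train + n_val + n_test
    trajectories = [integrate(sample_initial_condition(rng)) for _ in range(n_total)]
    return (trajectories[:n_train],
            trajectories[n_train:n_train + n_val],
            trajectories[n_train + n_val:])
```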
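Finally, the per-dataset settings quoted in the Experiment Setup row can be collected into plain configuration dictionaries, as sketched below. Only the reported optimizer, learning rate, epoch and batch-size values are encoded; the λ_h, λ_hk and λ_hu weights appear as placeholder arguments because the report states they are initialized and adjusted per dataset without quoting their values.

```python
import torch

# Training settings reported in appendix F of the paper.
CONFIGS = {
    "damped_pendulum": {"optimizer": "RMSprop", "lr": 4e-4, "epochs": 100, "batch_size": 128},
    "lotka_volterra":  {"optimizer": "Adam",    "lr": 5e-4, "epochs": 200, "batch_size": 128},
    "geophysical":     {"optimizer": "Adam",    "lr": 1e-4, "epochs": 30,  "batch_size": 32},
}

def build_optimizer(model, cfg):
    # Map the reported optimizer name to its PyTorch implementation.
    if cfg["optimizer"] == "RMSprop":
        return torch.optim.RMSprop(model.parameters(), lr=cfg["lr"])
    return torch.optim.Adam(model.parameters(), lr=cfg["lr"])

def total_loss(d_h_f, d_hk, d_hu, lam_h=1.0, lam_hk=1.0, lam_hu=1.0):
    # Weighted combination of the trajectory term and the constraint terms;
    # the placeholder λ values are illustrative, not the paper's dataset-specific ones.
    return lam_h * d_h_f + lam_hk * d_hk + lam_hu * d_hu
```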