LEADS: Learning Dynamical Systems that Generalize Across Environments

Authors: Yuan Yin, Ibrahim Ayed, Emmanuel de Bézenac, Nicolas Baskiotis, Patrick Gallinari

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We instantiate this framework for neural networks and evaluate it experimentally on representative families of nonlinear dynamics. We show that this new setting can exploit knowledge extracted from environment-dependent data and improves generalization for both known and novel environments. Our experiments are conducted on three families of dynamical systems described by three broad classes of differential equations.
Researcher Affiliation | Collaboration | 1 Sorbonne Université, Paris, France; 2 ThereSIS Lab, Thales, Paris, France; 3 Criteo AI Lab, Paris, France
Pseudocode | No | The paper describes the framework and optimization problem mathematically but does not include a pseudocode block or an algorithm (a hedged sketch of the framework's decomposition is given after the table).
Open Source Code | Yes | Code is available at https://github.com/yuan-yin/LEADS.
Open Datasets | No | LV and GS data are simulated with the DOPRI5 solver in NumPy [10, 13]. NS data is simulated with the pseudo-spectral method as in [19]. The authors simulated their own data and do not provide a link or citation to a public dataset (see the simulation sketch after the table).
Dataset Splits | No | The paper specifies the training and test sets and their sizes but does not explicitly mention a distinct validation split.
Hardware Specification | Yes | All experiments are performed with a single NVIDIA Titan Xp GPU.
Software Dependencies | No | The paper mentions software such as NumPy, the Adam optimizer, and numerical methods (RK4, Euler), but it does not specify version numbers for any of these software dependencies.
Experiment Setup | Yes | We used 4-layer MLPs for LV, 4-layer ConvNets for GS and Fourier Neural Operator (FNO) [19] for NS. We apply an exponential Scheduled Sampling [17] with exponent of 0.99 to stabilize the training. We use the Adam optimizer [15] with the same learning rate 10^-3 and (β1, β2) = (0.9, 0.999) across the experiments. For the hyperparameters in Eq. 8, we chose respectively λ = 5×10^3, 10^2, 10^5 and α = 10^-3, 10^-2, 10^-5 for LV, GS and NS. (A configuration sketch of these settings follows the table.)
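
Although the paper itself gives no pseudocode, the framework it describes can be summarized compactly: the dynamics of each environment e are decomposed as f_e = f + g_e, where f is shared across all environments and g_e is environment-specific, and the g_e terms are penalized so that common structure concentrates in f. The PyTorch sketch below (referenced from the Pseudocode row) illustrates this decomposition under those assumptions; all names here (LEADS, make_net, leads_loss, lam) are illustrative and not taken from the authors' repository.

    import torch
    import torch.nn as nn

    class LEADS(nn.Module):
        """Sketch of the LEADS decomposition f_e = f + g_e."""
        def __init__(self, make_net, n_envs):
            super().__init__()
            self.shared = make_net()                    # f: shared dynamics
            self.env_nets = nn.ModuleList(              # g_e: one per environment
                make_net() for _ in range(n_envs))

        def forward(self, x, env):
            # Derivative estimate for environment `env`: f(x) + g_e(x).
            return self.shared(x) + self.env_nets[env](x)

    def leads_loss(model, x, dxdt, env, lam):
        # Trajectory-fitting term plus a penalty on the environment-specific
        # component, pushing common dynamics into the shared network.
        pred = model(x, env)
        fit = ((pred - dxdt) ** 2).mean()
        penalty = (model.env_nets[env](x) ** 2).mean()
        return fit + lam * penalty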
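
As noted in the Open Datasets row, the authors simulate their own LV and GS data with a DOPRI5 solver. A minimal sketch of such a simulation for the Lotka-Volterra system is shown below, assuming SciPy's Dormand-Prince 4(5) integrator (method="RK45") as the DOPRI5 implementation; the system parameters, initial condition, and time grid are placeholders rather than the paper's settings.

    import numpy as np
    from scipy.integrate import solve_ivp

    def lotka_volterra(t, z, alpha, beta, gamma, delta):
        # Classic predator-prey dynamics: dx/dt, dy/dt.
        x, y = z
        return [alpha * x - beta * x * y, delta * x * y - gamma * y]

    def simulate(z0, params, t_max=10.0, n_points=100):
        t_eval = np.linspace(0.0, t_max, n_points)
        sol = solve_ivp(lotka_volterra, (0.0, t_max), z0, args=params,
                        method="RK45", t_eval=t_eval)
        return sol.t, sol.y  # states of shape (2, n_points)

    # Placeholder environment: one parameter setting, one initial condition.
    t, traj = simulate(z0=[1.0, 0.5], params=(0.5, 0.5, 0.5, 0.5))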
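
Finally, the optimizer and scheduled-sampling settings quoted in the Experiment Setup row translate into a short configuration sketch. This assumes a standard PyTorch training loop, interprets "exponent of 0.99" as a per-epoch exponential decay of the teacher-forcing probability, and uses placeholder values for the network factory and epoch count.

    import torch
    import torch.nn as nn

    # Placeholder network factory; the paper uses 4-layer MLPs (LV),
    # 4-layer ConvNets (GS) and FNO (NS).
    def make_net():
        return nn.Sequential(nn.Linear(2, 64), nn.Tanh(), nn.Linear(64, 2))

    model = LEADS(make_net, n_envs=4)  # reuses the LEADS sketch above
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3,
                                 betas=(0.9, 0.999))  # settings from the paper

    n_epochs = 100  # illustrative value only
    for epoch in range(n_epochs):
        # Exponential scheduled sampling: feed ground-truth states back with
        # probability 0.99**epoch, the model's own predictions otherwise.
        teacher_forcing_prob = 0.99 ** epoch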