Amortized Equation Discovery in Hybrid Dynamical Systems

Authors: Yongtuo Liu, Sara Magliacane, Miltiadis Kofinas, Stratis Gavves

ICML 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on four hybrid and six non-hybrid systems show that our method outperforms previous methods on equation discovery, segmentation, and forecasting.
Researcher Affiliation | Academia | University of Amsterdam. Correspondence to: Yongtuo Liu <y.liu6@uva.nl>.
Pseudocode | No | The paper describes the generative and inference models in detail using mathematical notation and text, but does not include any pseudocode or clearly labeled algorithm blocks.
Open Source Code | Yes | The code and datasets are available at https://github.com/yongtuoliu/Amortized-Equation-Discovery-in-Hybrid-Dynamical-Systems.
Open Datasets | Yes | Specifically, we validate on single-object scenarios using the Mass-spring Hopper dataset and the Susceptible, Infected and Recovered (SIR) disease dataset from Hybrid SINDy (Mangan et al., 2019). We validate on multi-object scenarios using the ODE-driven particle dataset and the Salsa-dancing dataset from GRASS (Liu et al., 2023). Further, we test the robustness of our methods on non-hybrid systems using datasets of the Coupled linear, Cubic oscillator, Lorenz 63, Hopf bifurcation, Selkov glycolysis, and Duffing oscillator from Course & Nair (2023).
Dataset Splits | Yes | We scale up the datasets and sample 240 initial conditions from the ranges (0.5, 3) and (-1, 1) for positions a and velocities b, respectively. Among them, 200 samples are for training, 20 for validation, and 20 for testing. (Mass-spring Hopper) ...In summary, 4,928 samples are for training, 191 samples for validation, and 204 samples for testing. (ODE-driven Particle) (A small sampling-and-split sketch follows the table.)
Hardware Specification | Yes | Each experiment is run on one Nvidia GeForce RTX 3090 GPU.
Software Dependencies | No | The paper mentions using the 'Adam optimizer', which implies PyTorch or TensorFlow, but does not provide specific version numbers for any software dependencies like Python, PyTorch, or TensorFlow.
Experiment Setup | Yes | We train all datasets with a fixed batch size of 40 for 20,000 training steps. We use the Adam optimizer with 10^-5 weight decay and clip gradient norms to 10. The learning rate is warmed up linearly from 5x10^-5 to 2x10^-4 for the first 2,000 steps, and then decays following a cosine manner with a rate of 0.99. ...dmin and dmax of the count variables are simply set as 20 and 50, respectively, for all datasets. The number of edge types L is set as 2, containing one no-interaction type and one with-interaction type.
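
The warmup-plus-cosine learning-rate schedule quoted in the Experiment Setup row can be sketched as follows. This is a minimal illustration assuming PyTorch; the exact cosine parameterization (the quoted "rate of 0.99") is not fully specified in the excerpt, so the lambda below simply decays the rate cosinely toward zero after warmup, and the model is a placeholder rather than the authors' architecture.

```python
# Hedged sketch of the reported training setup: Adam with 1e-5 weight decay,
# gradient-norm clipping at 10, linear warmup 5e-5 -> 2e-4 over 2,000 steps,
# then cosine decay for the remaining steps (assumed, not the exact schedule).
import math
import torch

TOTAL_STEPS = 20_000
WARMUP_STEPS = 2_000
BASE_LR = 2e-4        # peak learning rate after warmup
START_LR = 5e-5       # learning rate at step 0
WEIGHT_DECAY = 1e-5
GRAD_CLIP_NORM = 10.0

model = torch.nn.Linear(4, 4)  # placeholder for the actual model
optimizer = torch.optim.Adam(model.parameters(), lr=BASE_LR, weight_decay=WEIGHT_DECAY)

def lr_lambda(step: int) -> float:
    """Linear warmup from START_LR to BASE_LR, then cosine decay toward zero."""
    if step < WARMUP_STEPS:
        frac = step / WARMUP_STEPS
        return (START_LR + frac * (BASE_LR - START_LR)) / BASE_LR
    progress = (step - WARMUP_STEPS) / max(1, TOTAL_STEPS - WARMUP_STEPS)
    return 0.5 * (1.0 + math.cos(math.pi * progress))

scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

for step in range(TOTAL_STEPS):
    # loss = ... (forward pass on a batch of 40 sequences); loss.backward()
    torch.nn.utils.clip_grad_norm_(model.parameters(), GRAD_CLIP_NORM)
    optimizer.step()
    optimizer.zero_grad()
    scheduler.step()
```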
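
For the Mass-spring Hopper split quoted in the Dataset Splits row, a hedged reconstruction of the sampling and the 200/20/20 partition is shown below. The ranges and counts come from the quoted text; the RNG, array layout, and variable names are assumptions and may differ from the authors' repository.

```python
# Sketch: sample 240 initial conditions with positions a ~ U(0.5, 3) and
# velocities b ~ U(-1, 1), then split into 200 train / 20 val / 20 test.
import numpy as np

rng = np.random.default_rng(0)
n_samples = 240
positions = rng.uniform(0.5, 3.0, size=n_samples)    # initial positions a
velocities = rng.uniform(-1.0, 1.0, size=n_samples)  # initial velocities b
initial_conditions = np.stack([positions, velocities], axis=1)

train, val, test = np.split(initial_conditions, [200, 220])  # 200 / 20 / 20
print(train.shape, val.shape, test.shape)  # (200, 2) (20, 2) (20, 2)
```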