Physics-Informed Regularization for Domain-Agnostic Dynamical System Modeling

Authors: Zijie Huang, Wanjia Zhao, Jingdong Gao, Ziniu Hu, Xiao Luo, Yadi Cao, Yuanzhou Chen, Yizhou Sun, Wei Wang

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | By integrating the TRS loss within neural ordinary differential equation models, the proposed model TREAT demonstrates superior performance on diverse physical systems. It achieves a significant 11.5% MSE improvement in a challenging chaotic triple-pendulum scenario, underscoring TREAT's broad applicability and effectiveness.
Researcher Affiliation | Academia | Zijie Huang¹, Wanjia Zhao², Jingdong Gao¹, Ziniu Hu³, Xiao Luo¹, Yadi Cao¹, Yuanzhou Chen¹, Yizhou Sun¹, Wei Wang¹ (¹University of California, Los Angeles; ²Stanford University; ³California Institute of Technology)
Pseudocode | Yes | Appendix A.1, "Implementation of the Time-Reversal Symmetry Loss"; Algorithm 1, "The implementation of L_reverse". (A hedged sketch of such a loss is given after the table.)
Open Source Code | Yes | Code and further details are available here (the link points to https://treat-ode.github.io/).
Open Datasets | Yes | For the spring datasets and Pendulum, we generate irregularly sampled trajectories and set the training samples to 20,000 and the testing samples to 5,000, respectively. For Attractor, we generate 1,000 and 50 trajectories for training and testing, respectively, following Huh et al. (2020). 10% of training samples are used as validation sets and the maximum trajectory prediction length is 60. Details can be found in Appendix C.
Dataset Splits | Yes | For the spring datasets and Pendulum, we generate irregularly sampled trajectories and set the training samples to 20,000 and the testing samples to 5,000, respectively. For Attractor, we generate 1,000 and 50 trajectories for training and testing, respectively, following Huh et al. (2020). 10% of training samples are used as validation sets and the maximum trajectory prediction length is 60. Details can be found in Appendix C.
Hardware Specification | No | The paper does not specify the exact hardware used (e.g., specific GPU or CPU models).
Software Dependencies | No | We implement our model in PyTorch. [...] we use the Runge-Kutta method from the torchdiffeq Python package (Chen et al., 2021). No specific version numbers are provided for these software dependencies.
Experiment Setup | Yes | We implement our model in PyTorch. Encoder, generative model, and decoder parameters are jointly optimized with the AdamW optimizer (Loshchilov and Hutter, 2019) using a learning rate of 0.0001 for the spring datasets and 0.00001 for Pendulum. The batch size for all datasets is set to 512. (A hedged training-loop sketch using these settings follows the table.)
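The Algorithm 1 referenced in the Pseudocode row is not reproduced on this page. As a minimal sketch of the idea, the following shows a time-reversal symmetry regularizer for a neural ODE using torchdiffeq's odeint, assuming an autonomous dynamics network and a state of the form [position, velocity] with velocity negation as the reversing operation; the paper's actual L_reverse, reversal operator, and loss weighting may differ.

```python
# Minimal, illustrative sketch of a time-reversal symmetry (TRS) regularizer
# for a neural ODE. Assumes the state is [q, p] with an even dimension and
# that negating p is the time-reversing operation; this is an assumption,
# not a reproduction of the paper's Algorithm 1.
import torch
import torch.nn as nn
from torchdiffeq import odeint


class ODEFunc(nn.Module):
    """Toy dynamics network f(z) approximating dz/dt (autonomous)."""

    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, z):
        return self.net(z)


def reverse_op(z):
    # Time-reversing operator R: keep positions q, negate velocities/momenta p.
    q, p = z.chunk(2, dim=-1)
    return torch.cat([q, -p], dim=-1)


def trs_loss(func, z0, t, method="rk4"):
    """L_reverse-style term: forward rollout vs. re-reversed backward rollout."""
    # Forward trajectory z(t_0), ..., z(t_T)
    z_fwd = odeint(func, z0, t, method=method)
    # Apply R to the final state and integrate forward over the same horizon
    z_rev = odeint(func, reverse_op(z_fwd[-1]), t, method=method)
    # If the learned dynamics respect time-reversal symmetry, reversing the
    # backward rollout (in time and via R) should recover the forward one.
    z_rev_aligned = reverse_op(torch.flip(z_rev, dims=[0]))
    return ((z_fwd - z_rev_aligned) ** 2).mean()
```

In a full training objective this term would typically be weighted and added to the trajectory-reconstruction MSE; the weighting used by the paper is not quoted on this page.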
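For the Experiment Setup row, a minimal training-loop sketch under the quoted settings (AdamW, learning rate 1e-4 for the spring datasets and 1e-5 for Pendulum, batch size 512) could look like the following; the model interface, data loader contents, and loss composition are illustrative placeholders rather than the authors' code.

```python
# Illustrative training configuration matching the quoted setup only:
# AdamW optimizer, lr = 1e-4 (spring) / 1e-5 (Pendulum), batch size 512.
# The model call signature and dataset fields are assumptions.
import torch
from torch.utils.data import DataLoader


def train(model, train_set, dataset_name="spring", epochs=100, device="cpu"):
    lr = 1e-4 if dataset_name.startswith("spring") else 1e-5  # per-dataset learning rate
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    loader = DataLoader(train_set, batch_size=512, shuffle=True)

    model.to(device)
    for _ in range(epochs):
        for z0, t, target in loader:              # initial states, time stamps, ground-truth rollouts
            z0, t, target = z0.to(device), t.to(device), target.to(device)
            pred = model(z0, t)                    # trajectory prediction (placeholder interface)
            loss = ((pred - target) ** 2).mean()   # reconstruction MSE; the paper adds the TRS term
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```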