Distilling Governing Laws and Source Input for Dynamical Systems from Videos
Authors: Lele Luan, Yang Liu, Hao Sun
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on simulated dynamical scenes show that the proposed method distills closed-form governing equations and simultaneously identifies the unknown excitation input for several dynamical systems recorded in videos, filling a gap in the literature where no existing methods are applicable to this type of problem. |
| Researcher Affiliation | Academia | Lele Luan (1), Yang Liu (2), Hao Sun (3); (1) Department of Civil and Environmental Engineering, Northeastern University, Boston, MA, USA; (2) School of Engineering Sciences, University of the Chinese Academy of Sciences, Beijing, China; (3) Gaoling School of Artificial Intelligence, Renmin University of China, Beijing, 100872, China |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. It provides a schematic diagram (Figure 2) to illustrate the network architecture. |
| Open Source Code | Yes | The paper states that the source code for implementing this network is available at https://github.com/LeleLuan/VideoDiscovery. |
| Open Datasets | No | The paper states: 'The studied videos are generated by plotting the lumped mass(es) according to the simulated physical trajectories on different backgrounds. More details of the dataset generation are given in SI Section 1.' However, it does not provide a link, DOI, repository name, or formal citation for accessing this generated dataset (a hypothetical sketch of this kind of video generation follows the table). |
| Dataset Splits | No | The paper does not provide specific training/test/validation dataset splits. It only mentions a 'multi-step training strategy' and 'pre-training'. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models, processor types, or memory) used to run its experiments. |
| Software Dependencies | No | The paper mentions methods and network components such as U-Net, Softmax, Runge-Kutta, and SINDy, but it does not specify the software libraries, frameworks, or version numbers used to implement them (e.g., PyTorch, TensorFlow, SciPy); a minimal SINDy-style sketch is given after the table. |
| Experiment Setup | No | The paper mentions that hyperparameters and training strategy details are provided in the Supplementary Information (SI Section 2.3 and SI Section 2.4), but these specific experimental setup details (e.g., concrete hyperparameter values, batch sizes, learning rates) are not present in the main text of the paper. |
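
As a rough illustration of the dataset-generation step quoted in the Open Datasets row, the sketch below simulates a single mass-spring-damper trajectory with RK4 and renders the lumped mass as a bright patch moving over a blank background. All parameter values, the forcing term, and the image size are assumptions made here for illustration; the paper's actual generation procedure is described in its SI Section 1, which is not reproduced in the main text.

```python
# Hypothetical video-generation sketch: simulate one lumped mass and render it
# as frames. Parameters and image size are illustrative, not from the paper.
import numpy as np

def simulate_sdof(m=1.0, c=0.1, k=4.0, u=lambda t: np.sin(2.0 * t),
                  x0=1.0, v0=0.0, dt=0.02, n_steps=500):
    """Integrate m*x'' + c*x' + k*x = u(t) with explicit RK4."""
    def f(t, s):
        x, v = s
        return np.array([v, (u(t) - c * v - k * x) / m])
    states = np.zeros((n_steps, 2))
    s = np.array([x0, v0])
    for i in range(n_steps):
        t = i * dt
        k1 = f(t, s)
        k2 = f(t + dt / 2, s + dt / 2 * k1)
        k3 = f(t + dt / 2, s + dt / 2 * k2)
        k4 = f(t + dt, s + dt * k3)
        s = s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
        states[i] = s
    return states

def render_frames(positions, size=64, radius=3):
    """Draw the lumped mass as a bright square on a dark background."""
    frames = np.zeros((len(positions), size, size), dtype=np.uint8)
    # Map physical displacement to a horizontal pixel coordinate.
    lo, hi = positions.min(), positions.max()
    cols = ((positions - lo) / (hi - lo + 1e-8)
            * (size - 2 * radius - 1)).astype(int) + radius
    row = size // 2
    for i, col in enumerate(cols):
        frames[i, row - radius:row + radius, col - radius:col + radius] = 255
    return frames

states = simulate_sdof()
frames = render_frames(states[:, 0])
print(frames.shape)  # (500, 64, 64)
```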
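
The discovery stage builds on SINDy-style sparse regression, but no implementation details appear in the main text, so the following is only a minimal sketch of that idea on a toy trajectory. It assumes the states and their time derivatives are already available (in the paper they come from the video encoder); the candidate library, threshold, and toy system are illustrative choices, not the paper's.

```python
# Minimal SINDy-style sparse regression sketch on a toy damped oscillator.
import numpy as np

def build_library(X):
    """Candidate functions [1, x, v, x^2, x*v, v^2, sin(x)] for state (x, v)."""
    x, v = X[:, 0], X[:, 1]
    return np.column_stack([np.ones_like(x), x, v, x**2, x * v, v**2, np.sin(x)])

def stlsq(Theta, dXdt, threshold=0.05, n_iter=10):
    """Sequentially thresholded least squares: zero out small coefficients."""
    Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
    for _ in range(n_iter):
        small = np.abs(Xi) < threshold
        Xi[small] = 0.0
        for j in range(dXdt.shape[1]):
            big = ~small[:, j]
            if big.any():
                Xi[big, j] = np.linalg.lstsq(Theta[:, big], dXdt[:, j], rcond=None)[0]
    return Xi

# Toy data from a damped oscillator x'' ~= -4x - 0.1x', as a sanity check.
t = np.linspace(0, 10, 2000)
x = np.cos(2 * t) * np.exp(-0.05 * t)
v = np.gradient(x, t)
a = np.gradient(v, t)
X = np.column_stack([x, v])
dXdt = np.column_stack([v, a])
Xi = stlsq(build_library(X), dXdt)
print(Xi.round(2))  # nonzero entries should approximate [v; -4x - 0.1v]
```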