Constants of motion network

Authors: Muhammad Firmansyah Kasim, Yi Heng Lim

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our implementation and experiments can be found in the public domain [1]. [...] To demonstrate the capability of COMET to simultaneously learn both the dynamics and the constants of motion, we tested it in a variety of cases. [...] For each case, we compared the performance of COMET with other methods: (1) simple neural ODE (NODE) [10], (2) Hamiltonian neural network (HNN) [6] with the coordinates given in each case below, (3) neural symplectic form (NSF) [7], and (4) Lagrangian neural network (LNN) [8]. (The core learning mechanism is illustrated in the first code sketch after the table.)
Researcher Affiliation | Industry | M. F. Kasim & Y. H. Lim, Machine Discovery Ltd., Oxford, United Kingdom, {muhammad, yi.heng}@machine-discovery.com
Pseudocode | No | The paper describes the computational procedures mathematically and in prose, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Our implementation and experiments can be found in the public domain [1]. [1] https://github.com/machine-discovery/comet/
Open Datasets | No | For all the cases in this section, the training data were generated by simulating the dynamics of the system from t = 0 to t = 10. From the simulations, we collected the states s as well as the states' rate of change, ŝ̇, which were calculated analytically and to which Gaussian noise with standard deviation σ = 0.05 was added. (See the second code sketch after the table for an illustration.)
Dataset Splits | No | The paper describes the generation of training and test data, but it does not specify a validation dataset split or a validation phase in the experimental setup.
Hardware Specification | Yes | The training was done as described in section 4, which takes about 5-7 hours on an NVIDIA T4 GPU.
Software Dependencies | No | The paper mentions its open-source implementation, but it does not list the specific software libraries or version numbers needed to reproduce the experiments.
Experiment Setup | Yes | In order to train COMET, the loss function in this case is constructed as L = ‖ṡ − ŝ̇‖² + w₁‖ṡ₀ − ŝ̇‖² + w₂ Σᵢ |∇cᵢ · ṡ₀|², where the w are the tunable regularization weights. [...] The neural network architecture for each method is detailed in appendix ??. [...] Specifically, COMET was trained in the damped pendulum, two-body, and 2D nonlinear spring cases from section 4 without added noise and ran for 3000 epochs. [...] The neural network was constructed with 1D convolutional layers with kernel size 5 and circular padding, followed by a logsigmoid activation function. The pattern above was repeated 4 times but without the activation function for the last one, using 250 channels in the hidden layers. (The loss and the convolutional architecture are sketched in the third and fourth code examples after the table.)
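
For context on the Research Type row: COMET learns the dynamics and the constants of motion with a single network that outputs a raw rate of change ṡ₀ together with the constants c(s), and then constrains the dynamics so that every learned constant is conserved. One way to realize this, consistent with the paper's description, is to project ṡ₀ onto the null space of the Jacobian ∂c/∂s using a QR decomposition. The PyTorch sketch below is a minimal illustration under those assumptions; the function name comet_dynamics, the network interface, and the tensor shapes are illustrative, not the authors' API.

    import torch

    def comet_dynamics(s, net, n_states, n_constants):
        # Hypothetical interface: `net` maps the state s (1-D tensor) to the
        # concatenation of the raw rate of change s_dot0 and the learned
        # constants of motion c(s).
        out = net(s)
        s_dot0, c = out[:n_states], out[n_states:]

        # Jacobian of the constants of motion w.r.t. the state,
        # shape (n_constants, n_states).
        jac = torch.autograd.functional.jacobian(
            lambda x: net(x)[n_states:], s, create_graph=True)

        # Orthonormal basis of span{grad c_i} via a QR decomposition.
        q, _ = torch.linalg.qr(jac.T)        # q: (n_states, n_constants)

        # Remove the components of s_dot0 along the gradients of the
        # constants, so that dc_i/dt = grad c_i . s_dot = 0 for every i.
        s_dot = s_dot0 - q @ (q.T @ s_dot0)
        return s_dot, s_dot0, c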
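
The data-generation recipe quoted in the Open Datasets row can be reproduced along these lines. The snippet below uses a damped pendulum (one of the paper's test cases), simulates it from t = 0 to t = 10, and adds Gaussian noise with σ = 0.05 to the analytically computed rates of change; the pendulum parameters, initial condition, and sample count are illustrative assumptions, not values from the paper.

    import numpy as np
    from scipy.integrate import solve_ivp

    def damped_pendulum(t, s, omega2=1.0, gamma=0.1):
        # State s = (theta, theta_dot); returns the analytic rate of change.
        theta, theta_dot = s
        return np.array([theta_dot, -omega2 * np.sin(theta) - gamma * theta_dot])

    rng = np.random.default_rng(0)
    t_eval = np.linspace(0.0, 10.0, 1000)   # simulate from t = 0 to t = 10
    sol = solve_ivp(damped_pendulum, (0.0, 10.0), [2.0, 0.0],
                    t_eval=t_eval, rtol=1e-8)

    states = sol.y.T                        # (n_samples, n_states)
    # Rates of change computed analytically, then corrupted with Gaussian
    # noise of standard deviation 0.05, as described in the paper.
    rates = np.stack([damped_pendulum(t, s) for t, s in zip(sol.t, states)])
    noisy_rates = rates + rng.normal(0.0, 0.05, size=rates.shape)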
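
The loss quoted in the Experiment Setup row combines a data-fitting term on the projected dynamics ṡ, a data-fitting term on the raw output ṡ₀, and a penalty that keeps the learned constants stationary along ṡ₀. A minimal sketch, assuming the tensors produced by the projection sketch above:

    import torch

    def comet_loss(s_dot, s_dot0, s_dot_target, grad_c, w1=1.0, w2=1.0):
        # s_dot:        projected dynamics, shape (n_states,)
        # s_dot0:       raw network output before projection, (n_states,)
        # s_dot_target: observed (noisy) rate of change, i.e. the hatted
        #               quantity in the paper's loss
        # grad_c:       Jacobian of the constants of motion, (n_c, n_states)
        loss = torch.sum((s_dot - s_dot_target) ** 2)
        loss = loss + w1 * torch.sum((s_dot0 - s_dot_target) ** 2)
        # Penalize |grad c_i . s_dot0|^2 summed over the learned constants.
        loss = loss + w2 * torch.sum((grad_c @ s_dot0) ** 2)
        return loss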
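
The quoted convolutional architecture (1D convolutions with kernel size 5 and circular padding, logsigmoid activations, the pattern repeated 4 times with no activation after the last layer, 250 hidden channels) is specific enough to sketch. The input and output channel counts below are placeholders, since the paper's quote does not fix them:

    import torch.nn as nn

    def make_conv_net(in_channels, out_channels, hidden=250, n_layers=4):
        layers = []
        for i in range(n_layers):
            c_in = in_channels if i == 0 else hidden
            c_out = out_channels if i == n_layers - 1 else hidden
            # padding=2 with kernel_size=5 preserves the sequence length;
            # circular padding matches the quoted description.
            layers.append(nn.Conv1d(c_in, c_out, kernel_size=5,
                                    padding=2, padding_mode="circular"))
            if i < n_layers - 1:
                layers.append(nn.LogSigmoid())  # no activation after the last conv
        return nn.Sequential(*layers)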