Symplectic ODE-Net: Learning Hamiltonian Dynamics with Control
Authors: Yaofeng Desmond Zhong, Biswadip Dey, Amit Chakraborty
ICLR 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this task, we use the model described in Section 3.2 and present the predicted trajectories of the learned models as well as the learned functions of Sym ODEN. Table 1 shows the train error and the prediction error per trajectory of the two models. We can see Unstructured Sym ODEN performs better than HNN. |
| Researcher Affiliation | Collaboration | Yaofeng Desmond Zhong Princeton University y.zhong@princeton.edu Biswadip Dey Siemens Corporate Technology biswadip.dey@siemens.com Amit Chakraborty Siemens Corporate Technology amit.chakraborty@siemens.com |
| Pseudocode | No | The paper does not contain any explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code for the Sym ODEN framework and experiments is available at https://github.com/d-biswa/Symplectic-ODENet. |
| Open Datasets | Yes | All the other tasks deal with embedded angle data and velocity directly, so we use OpenAI Gym (Brockman et al., 2016) simulators to generate trajectory data. |
| Dataset Splits | No | The paper describes training and testing sets, but does not explicitly mention a separate validation set or its specific split for hyperparameter tuning. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., CPU/GPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions software such as 'OpenAI Gym' and the 'Adam optimizer' and methods such as 'RK4', but does not provide version numbers for any software dependency. |
| Experiment Setup | Yes | In all the tasks, we train our model using Adam optimizer (Kingma & Ba, 2014) with 1000 epochs. We set a time horizon τ = 3, and choose RK4 as the numerical integration scheme in Neural ODE. We vary the size of the training set by doubling from 16 initial state conditions to 1024 initial state conditions. Each initial state condition is combined with five constant controls u = −2.0, −1.0, 0.0, 1.0, 2.0 to produce initial conditions for simulation. Each trajectory is generated by integrating the dynamics 20 time steps forward. We set the size of mini-batches to be the number of initial state conditions. |
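The setup row above combines two numerical ingredients: RK4 as the integration scheme inside Neural ODE, and multi-step rollouts (time horizon τ = 3 in the paper; trajectories of 20 steps for data generation) that the learned dynamics must match. The sketch below illustrates only that integration pattern with plain NumPy; the `pendulum` dynamics, step size, and function names are illustrative assumptions, not the paper's actual model or code.

```python
import numpy as np

def rk4_step(f, x, u, dt):
    """One classical Runge-Kutta (RK4) step of dx/dt = f(x, u)."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2.0 * k2 + 2.0 * k3 + k4)

def rollout(f, x0, u, dt, horizon):
    """Integrate `horizon` RK4 steps forward under constant control u.

    During training, a rollout of this kind over the time horizon
    (tau = 3 in the paper) is compared against the ground-truth
    trajectory to form the prediction loss.
    """
    traj = [x0]
    for _ in range(horizon):
        traj.append(rk4_step(f, traj[-1], u, dt))
    return np.stack(traj)

# Hypothetical stand-in dynamics: a damped pendulum with constant
# torque u, in place of the learned Hamiltonian vector field.
def pendulum(x, u):
    q, p = x
    return np.array([p, -9.8 * np.sin(q) - 0.1 * p + u])

# A 20-step trajectory from one initial condition and one constant
# control, mirroring how each training trajectory is generated.
traj = rollout(pendulum, np.array([1.0, 0.0]), u=0.0, dt=0.05, horizon=20)
```

In the paper's pipeline this rollout is differentiable (via Neural ODE), so the loss gradient flows through every RK4 step back to the network parameters; the NumPy version here only shows the forward integration.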