Extending Lagrangian and Hamiltonian Neural Networks with Differentiable Contact Models
Authors: Yaofeng Desmond Zhong, Biswadip Dey, Amit Chakraborty
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We demonstrate this framework on a series of challenging 2D and 3D physical systems with different coefficients of restitution and friction." See also Section 4, "Experiments: dynamics and parameter learning". |
| Researcher Affiliation | Industry | Siemens Technology, Princeton, NJ 08536, USA. {yaofeng.zhong, biswadip.dey, amit.chakraborty}@siemens.com |
| Pseudocode | Yes | Algorithm 1: Rigid Body Dynamics with Contact (a hedged sketch of such a contact step appears after this table). |
| Open Source Code | Yes | Code available at https://github.com/Physics-aware-AI/DiffCoSim. |
| Open Datasets | No | "For each task, the training set is generated by randomly sampling 800 collision-free initial conditions and then simulating the dynamics for 100 time steps." (The paper describes how the data were generated but does not provide access to a public or open dataset; a sketch of this generation protocol appears after the table.) |
| Dataset Splits | No | "The evaluation and test set are generated in a similar way with 100 trajectories, respectively." and "We vary the training sample size from 25 to 800 trajectories and report the validation loss (L1-norm)." (The paper mentions a validation loss and an evaluation set but does not give explicit split percentages or exact per-split sample counts needed to reproduce the data partitioning.) |
| Hardware Specification | No | The paper does not provide specific details on the hardware used for experiments, such as GPU/CPU models or cloud computing specifications. |
| Software Dependencies | No | "Our implementation relies on publicly available codebases including Pytorch [52], CHNN [17], Symplectic ODE-Net [14] and Neural ODE [18]. We handle training using Pytorch Lightning [53] for the purpose of reproducibility." (No version numbers are provided for these software components.) |
| Experiment Setup | Yes | "We use RK4 as the ODE solver in Neural ODE. We compute the L1-norm of the difference between predicted and true trajectories, and use it as the loss function for training. The gradients are computed by differentiating through Algorithm 1, and learnable parameters are updated using the AdamW optimizer [50, 51] with a learning rate of 0.001." (A training-loop sketch built from these details appears after the table.) |
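
The Pseudocode row points to Algorithm 1 (Rigid Body Dynamics with Contact); the paper's algorithm is not reproduced here. Below is a minimal, hedged sketch of the kind of differentiable simulation step the paper describes: free dynamics integrated with RK4, with a contact impulse parameterized by a coefficient of restitution applied when penetration is detected. All names (`free_dynamics`, `rk4_step`, `contact_step`, the 1-DoF floor geometry) are illustrative assumptions, not the authors' API.

```python
import torch

def rk4_step(f, x, dt):
    # Classic fourth-order Runge-Kutta step (the paper uses RK4 in Neural ODE).
    k1 = f(x)
    k2 = f(x + 0.5 * dt * k1)
    k3 = f(x + 0.5 * dt * k2)
    k4 = f(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def contact_step(q, v, e):
    # Hypothetical 1-DoF contact with a floor at q = 0: when the body
    # penetrates while moving downward, reflect the normal velocity,
    # scaled by a (possibly learnable) coefficient of restitution e.
    penetrating = (q <= 0.0) & (v < 0.0)
    return torch.where(penetrating, -e * v, v)

def simulate(q0, v0, free_dynamics, e, dt=0.01, steps=100):
    # Integrate the free dynamics with RK4, resolving contacts between
    # steps. Every operation is differentiable, so gradients flow back
    # to e and to the parameters of free_dynamics.
    q, v = q0, v0
    traj = [torch.stack([q, v])]
    for _ in range(steps):
        q, v = rk4_step(free_dynamics, torch.stack([q, v]), dt)
        v = contact_step(q, v, e)
        traj.append(torch.stack([q, v]))
    return torch.stack(traj)  # shape: (steps + 1, 2)
```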
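
The Open Datasets row notes that the data are simulated rather than downloaded: 800 collision-free initial conditions, each rolled out for 100 time steps, with 100 trajectories each for evaluation and test generated the same way. A hedged sketch of that protocol follows; `simulate_fn` stands in for a rollout like the one above, and `is_collision_free` is a hypothetical system-specific test, since the paper does not publish its sampling code.

```python
import torch

def sample_initial_condition(is_collision_free):
    # Rejection-sample until the state passes a (hypothetical)
    # system-specific collision-free test, as the paper requires.
    while True:
        q0 = torch.rand(()) * 2.0 + 0.5   # illustrative ranges only
        v0 = torch.randn(())
        if is_collision_free(q0, v0):
            return q0, v0

def make_dataset(n_traj, simulate_fn, is_collision_free, steps=100):
    # n_traj = 800 for the training set; the paper generates the
    # evaluation and test sets the same way with 100 trajectories each.
    trajs = []
    for _ in range(n_traj):
        q0, v0 = sample_initial_condition(is_collision_free)
        trajs.append(simulate_fn(q0, v0, steps=steps))
    return torch.stack(trajs)  # (n_traj, steps + 1, state_dim)
```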
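
The Experiment Setup row gives the training ingredients: RK4 integration, an L1 loss between predicted and true trajectories, gradients through the contact algorithm, and AdamW at learning rate 0.001. A minimal sketch of how those pieces might fit together; `model` and its `step` interface are assumptions standing in for the learned Lagrangian/Hamiltonian dynamics plus contact parameters, not the authors' code.

```python
import torch

def train(model, train_trajs, dt=0.01, epochs=100, lr=1e-3):
    # AdamW with a learning rate of 0.001, as reported in the paper.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    for _ in range(epochs):
        for true_traj in train_trajs:  # each: (T, state_dim)
            # Roll the learned dynamics forward from the true initial
            # state; model.step is assumed to do one RK4 + contact step.
            x = true_traj[0]
            pred = [x]
            for _ in range(true_traj.shape[0] - 1):
                x = model.step(x, dt)
                pred.append(x)
            pred_traj = torch.stack(pred)
            # L1-norm of the difference between predicted and true
            # trajectories, used directly as the training loss.
            loss = torch.mean(torch.abs(pred_traj - true_traj))
            optimizer.zero_grad()
            loss.backward()   # differentiates through the contact model
            optimizer.step()
```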