Distributional Gradient Matching for Learning Uncertain Neural Dynamics Models
Authors: Lenart Treven, Philippe Wenk, Florian Dörfler, Andreas Krause
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our experiments show that, compared to traditional approximate inference methods based on numerical integration, our approach is faster to train, faster at predicting previously unseen trajectories, and in the context of neural ODEs, significantly more accurate. |
| Researcher Affiliation | Academia | Lenart Treven (ETH Zürich, trevenl@ethz.ch); Philippe Wenk (ETH Zürich, wenkph@ethz.ch); Florian Dörfler (ETH Zürich, dorfler@ethz.ch); Andreas Krause (ETH Zürich, krausea@ethz.ch) |
| Pseudocode | No | No pseudocode or clearly labeled algorithm blocks were found in the paper. |
| Open Source Code | Yes | Code is available at: https://github.com/lenarttreven/dgm |
| Open Datasets | Yes | We use known parametric systems from the literature to generate simulated, noisy trajectories. For these benchmarks, we use the two-dimensional Lotka-Volterra (LV) system, the three-dimensional, chaotic Lorenz (LO) system, a four-dimensional double pendulum (DP) and a twelve-dimensional quadrocopter (QU) model. For all systems, the exact equations and ground-truth parameters are provided in Appendix A. (A simulation sketch follows this table.) |
| Dataset Splits | No | The paper describes training and testing splits and procedures, but no explicit 'validation' set or its split information is provided. |
| Hardware Specification | Yes | For the one trajectory setting, all DGM related experiments were run on a Nvidia RTX 2080 Ti, where the longest ones took 15 minutes. The comparison methods were given 24h, on Intel Xeon Gold 6140 CPUs. For the multi-trajectory setting, we used Nvidia Titan RTX, where all experiments finished in less than 3 hours. |
| Software Dependencies | Yes | For all comparisons, we use the Julia implementations of SGLD and SGHMC provided by Dandekar et al. (2021), the PyTorch implementation of NDP provided by Norcliffe et al. (2021), and our own JAX (Bradbury et al., 2018) implementation of DGM. |
| Experiment Setup | Yes | In the interest of simplicity, we thus set it in all our experiments in Section 4 to a default value of λ = |D| / |X|... supplied both SGLD and SGHMC with very strong priors and fine-tuned them with an extensive hyperparameter sweep (see Appendix C for more details). (This default ratio is illustrated after this table.) |
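
To make the data-generation protocol quoted in the Open Datasets row concrete, here is a minimal JAX sketch of simulating one noisy trajectory from the two-dimensional Lotka-Volterra benchmark. The parameter values, initial condition, time grid, and noise scale below are illustrative assumptions, not the paper's; the exact equations and ground-truth parameters are given in the paper's Appendix A.

```python
import jax
import jax.numpy as jnp
from jax.experimental.ode import odeint

# Hypothetical ground-truth parameters; the values used in the paper
# are listed in its Appendix A.
ALPHA, BETA, GAMMA, DELTA = 1.5, 1.0, 3.0, 1.0

def lotka_volterra(x, t):
    """Predator-prey dynamics dx/dt = f(x) for the 2D LV system."""
    prey, pred = x
    return jnp.array([ALPHA * prey - BETA * prey * pred,
                      DELTA * prey * pred - GAMMA * pred])

# Simulate one trajectory, then corrupt it with i.i.d. Gaussian noise,
# mirroring the "simulated, noisy trajectories" setup quoted above.
ts = jnp.linspace(0.0, 10.0, 100)      # illustrative time grid
x0 = jnp.array([1.0, 1.0])             # illustrative initial condition
traj = odeint(lotka_volterra, x0, ts)  # shape (100, 2)

key = jax.random.PRNGKey(0)
noisy_traj = traj + 0.1 * jax.random.normal(key, traj.shape)  # noise scale is illustrative
```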
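
The default trade-off weight quoted in the Experiment Setup row is a simple ratio of set sizes. The snippet below is only an illustrative computation under assumed counts; the precise definitions of D and X are given in the paper.

```python
# Illustrative computation of the default weight λ = |D| / |X| quoted
# above. Both counts below are hypothetical placeholders.
num_observations = 500   # |D|: number of noisy observations
num_points = 100         # |X|: size of the point set X from the paper
lam = num_observations / num_points  # default λ used in Section 4
```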