Learning Stable Deep Dynamics Models
Authors: J. Zico Kolter, Gaurav Manek
NeurIPS 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Section 4 (Empirical results): We illustrate our technique on several example problems, first highlighting the (inherent) stability of the method for random networks, demonstrating learning on simple n-link pendulum dynamics, and finally learning high-dimensional stable latent space dynamics for dynamic video textures via a VAE model. |
| Researcher Affiliation | Collaboration | Gaurav Manek, Department of Computer Science, Carnegie Mellon University (gmanek@cs.cmu.edu); J. Zico Kolter, Department of Computer Science, Carnegie Mellon University and Bosch Center for AI (zkolter@cs.cmu.edu) |
| Pseudocode | No | The paper contains mathematical equations and functional descriptions but no clearly labeled 'Pseudocode' or 'Algorithm' blocks. |
| Open Source Code | No | The paper does not contain any explicit statements or links indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | The paper mentions training data for n-link pendulums is 'produced by the symbolic algebra solver sympy, using simulation code adapted from [21]' and for video textures is 'a sequence of frames sampled from videos', but does not provide concrete access information (link, DOI, formal citation) for any publicly available dataset. |
| Dataset Splits | No | The paper does not provide specific dataset split information (percentages, sample counts, or citations to predefined splits) for training, validation, and testing. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, memory amounts) used for running experiments. |
| Software Dependencies | No | The paper mentions PyTorch and sympy but does not provide specific version numbers for these or any other software dependencies needed to replicate the experiment. |
| Experiment Setup | Yes | Specifically, we let $\hat f$ be defined by a 2-100-100-2 fully connected network, and $V$ be a 2-100-100-1 ICNN, with both networks initialized via the default weights of PyTorch (the Kaiming uniform initialization [8]) and with the ICNN having its $U$ weights further put through a softplus unit to make them positive. We train the full system to minimize $\mathrm{KL}\big(\mathcal{N}(\mu_t, \sigma_t^2 I)\,\Vert\,\mathcal{N}(0, I)\big) + \mathbb{E}_z\big[\lVert d(z_t) - y_t\rVert_2^2 + \lVert d(f(z_t)) - y_{t+1}\rVert_2^2\big]$ over $e, d, \hat f, V$ (eq. 21). |
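
As a companion to the setup quoted above, the following is a minimal PyTorch sketch of the two networks: $\hat f$ as a plain 2-100-100-2 MLP, and $V$ as a 2-100-100-1 input-convex network whose hidden-to-hidden $U$ weights pass through a softplus so they remain positive. This is an illustrative reconstruction, not the authors' released code; the `ICNN` class name and the use of plain ReLU (the paper uses a smoothed variant) are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ICNN(nn.Module):
    """Input-convex network sketch (2-100-100-1). Convexity in x is
    preserved because each U weight matrix is made positive via softplus
    and the activation is convex and nondecreasing. Illustrative only;
    the paper uses a smoothed ReLU rather than plain ReLU."""
    def __init__(self, dims=(2, 100, 100, 1)):
        super().__init__()
        # W_k maps the raw input into layer k; U_k propagates the previous
        # convex activation. PyTorch's default Linear init is Kaiming
        # uniform, matching the initialization stated in the paper.
        self.W = nn.ModuleList(nn.Linear(dims[0], d) for d in dims[1:])
        self.U = nn.ModuleList(
            nn.Linear(d_in, d_out, bias=False)
            for d_in, d_out in zip(dims[1:-1], dims[2:])
        )

    def forward(self, x):
        z = F.relu(self.W[0](x))
        for W, U in zip(self.W[1:], self.U):
            # softplus keeps the U weights positive, as the paper describes
            z = F.relu(W(x) + F.linear(z, F.softplus(U.weight)))
        return z

# The dynamics network f_hat: a plain 2-100-100-2 fully connected net.
f_hat = nn.Sequential(
    nn.Linear(2, 100), nn.ReLU(),
    nn.Linear(100, 100), nn.ReLU(),
    nn.Linear(100, 2),
)

V = ICNN()
x = torch.randn(8, 2)
print(V(x).shape)      # torch.Size([8, 1])
print(f_hat(x).shape)  # torch.Size([8, 2])
```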
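
The VAE training objective in eq. (21) can likewise be sketched directly: the closed-form KL between the diagonal-Gaussian posterior and a standard-normal prior, plus squared-error reconstructions of the current frame and the predicted next frame. A single-sample Monte Carlo estimate stands in for the expectation over $z$; all function and argument names here are hypothetical.

```python
import torch

def vae_dynamics_loss(mu_t, logvar_t, z_t, y_t, y_tp1, decode, f):
    """Sketch of eq. (21): KL(N(mu_t, sigma_t^2 I) || N(0, I)) plus the
    two reconstruction terms, with the expectation over z approximated by
    the single sample z_t. Names are illustrative, not the authors' code."""
    # Closed-form KL divergence for a diagonal Gaussian vs. N(0, I).
    kl = -0.5 * torch.sum(1 + logvar_t - mu_t.pow(2) - logvar_t.exp())
    # ||d(z_t) - y_t||^2: reconstruct the current frame.
    recon_t = ((decode(z_t) - y_t) ** 2).sum()
    # ||d(f(z_t)) - y_{t+1}||^2: reconstruct the next frame through the
    # learned stable dynamics f.
    recon_tp1 = ((decode(f(z_t)) - y_tp1) ** 2).sum()
    return kl + recon_t + recon_tp1
```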