Zero-Shot Transfer of Neural ODEs
Authors: Tyler Ingebrand, Adam Thorpe, Ufuk Topcu
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate state-of-the-art system modeling accuracy for two MuJoCo robot environments and show that the learned models can be used for more efficient MPC control of a quadrotor. (...) We demonstrate the effectiveness of our approach for predicting and controlling dynamical systems through several numerical experiments. |
| Researcher Affiliation | Academia | Tyler Ingebrand, Adam J. Thorpe, Ufuk Topcu, University of Texas at Austin, Austin, TX 78712 |
| Pseudocode | Yes | Algorithm 1 Training Function Encoders with Neural ODE Basis Functions (...) Algorithm 2 The Residuals Method (a minimal PyTorch sketch of Algorithm 1's training loop follows the table) |
| Open Source Code | Yes | The source code is available at https://github.com/tyler-ingebrand/NeuralODEFunctionEncoder. (...) We provide code as a zip file in the initial submission. A link to a github repository will be provided in the final version. |
| Open Datasets | Yes | We evaluate the performance of our proposed approach on the Half-Cheetah and Ant environments [28] (...) We use a simulated quadrotor system using PyBullet [31] |
| Dataset Splits | Yes | Evaluations are done on a holdout set collected through the same means. (...) Evaluation is over 5 seeds, shaded regions show the first and third quartiles around the median. (...) Shaded region is 1st and 3rd quartiles over 200 trajectories (left) and over 5 trajectories (middle, right). |
| Hardware Specification | Yes | All experiments use an Intel 9th Generation i9 CPU and a Nvidia 2060 GPU with 6GB of memory. |
| Software Dependencies | No | The paper mentions software components like 'ADAM optimizer' and 'RK4 integrator' but does not specify version numbers for these or other key software libraries and dependencies. |
| Experiment Setup | Yes | We use an ADAM optimizer with a learning rate of 1e-3, and gradient clipping with a max norm of 1. The NODE baseline uses 4 hidden layers of size 512, while the FE + NODE baseline uses 4 hidden layers of size 51 for each basis function. All baselines train on 50 functions per gradient update via gradient accumulation. States are normalized to have 0 mean and unit variance. (A code sketch of this setup appears after the table.) |
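To make the Pseudocode row concrete, below is a minimal PyTorch sketch of the idea behind Algorithm 1: learn a set of neural ODE basis vector fields, fit per-system coefficients by least squares on a few example transitions, and train the basis on prediction error over query transitions. The shapes, the finite-difference coefficient fit, the ridge regularizer, and all helper names are illustrative assumptions, not the authors' exact implementation; only the RK4 integrator, gradient clipping at max norm 1, and gradient accumulation over sampled systems are taken from the paper.

```python
import torch
import torch.nn as nn

class BasisVectorFields(nn.Module):
    """k independent MLP vector fields g_1..g_k; one system's dynamics are
    modeled as the linear combination sum_j c_j * g_j (assumed architecture)."""
    def __init__(self, state_dim: int, n_basis: int = 8, hidden: int = 51):
        super().__init__()
        self.nets = nn.ModuleList(
            nn.Sequential(nn.Linear(state_dim, hidden), nn.ReLU(),
                          nn.Linear(hidden, hidden), nn.ReLU(),
                          nn.Linear(hidden, state_dim))
            for _ in range(n_basis))

    def forward(self, x):                                   # x: (N, d)
        return torch.stack([g(x) for g in self.nets], -1)   # (N, d, k)

def rk4_step(field, x, dt):
    """One classic RK4 step of dx/dt = field(x); the paper uses an RK4 integrator."""
    k1 = field(x)
    k2 = field(x + 0.5 * dt * k1)
    k3 = field(x + 0.5 * dt * k2)
    k4 = field(x + dt * k3)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

def fit_coefficients(basis, x, x_next, dt, ridge=1e-3):
    """Least-squares coefficients c so that sum_j c_j g_j(x) matches the
    finite-difference targets (x_next - x) / dt over the example set.
    Ridge-regularized normal equations are an assumed stand-in for the
    paper's least-squares step; they keep the solve differentiable."""
    A = basis(x).flatten(0, 1)                   # (N*d, k)
    y = ((x_next - x) / dt).reshape(-1)          # finite-difference targets
    gram = A.T @ A + ridge * torch.eye(A.shape[1])
    return torch.linalg.solve(gram, A.T @ y)     # (k,)

def train_step(basis, opt, systems, dt):
    """One update accumulated over a batch of sampled systems, mirroring the
    structure of Algorithm 1 and the paper's gradient accumulation."""
    opt.zero_grad()
    total = 0.0
    for (xe, xe_next), (xq, xq_next) in systems:            # example / query sets
        c = fit_coefficients(basis, xe, xe_next, dt)
        combined = lambda s: basis(s) @ c                   # combined (N, d) field
        loss = ((rk4_step(combined, xq, dt) - xq_next) ** 2).mean()
        loss.backward()
        total += loss.item()
    nn.utils.clip_grad_norm_(basis.parameters(), max_norm=1.0)
    opt.step()
    return total / len(systems)
```

At test time this is where the zero-shot transfer happens: one call to `fit_coefficients` on a handful of observed transitions from a new system yields `c`, and `rk4_step(lambda s: basis(s) @ c, x0, dt)` rolls the model forward with no gradient updates. Algorithm 2 (the residuals method) additionally learns an average vector field and fits the basis to the residual dynamics; it is omitted here for brevity.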
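The Experiment Setup row also translates directly into a few lines of PyTorch. A minimal sketch follows, assuming a HalfCheetah-sized 17-dimensional state and a placeholder objective; the 4 hidden layers of size 512, the ADAM optimizer with learning rate 1e-3, the max-norm-1 gradient clipping, and the zero-mean/unit-variance normalization come from the paper, while everything else is illustrative.

```python
import torch
import torch.nn as nn

state_dim = 17                                    # HalfCheetah observation size (assumed)

# NODE baseline: 4 hidden layers of size 512, per the paper.
model = nn.Sequential(
    nn.Linear(state_dim, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, state_dim),
)
opt = torch.optim.Adam(model.parameters(), lr=1e-3)   # ADAM, learning rate 1e-3

# Normalize states to zero mean and unit variance using dataset statistics.
states = torch.randn(10_000, state_dim)               # stand-in for collected data
mu, std = states.mean(0), states.std(0)
x = (states - mu) / (std + 1e-8)

# One update with gradient clipping at max norm 1, as described in the paper.
loss = model(x).pow(2).mean()                         # placeholder objective
opt.zero_grad()
loss.backward()
nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
opt.step()
```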