On the Forward Invariance of Neural ODEs
Authors: Wei Xiao, Tsun-Hsuan Wang, Ramin Hasani, Mathias Lechner, Yutong Ban, Chuang Gan, Daniela Rus
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We test our method on a series of representation learning tasks, including modeling physical dynamics and convexity portraits, as well as safe collision avoidance for autonomous vehicles. |
| Researcher Affiliation | Collaboration | ¹Computer Science and Artificial Intelligence Lab, Massachusetts Institute of Technology, Cambridge, MA, USA. ²MIT-IBM Watson AI Lab. Correspondence to: Wei Xiao <weixy@mit.edu>. |
| Pseudocode | Yes | Algorithm 1 Invariance Propagation to Parameters |
| Open Source Code | Yes | Videos and code are available on the website: https://weixy21.github.io/invariance/. |
| Open Datasets | Yes | The two datasets consist of trajectories of the HalfCheetah-v2 and Walker2d-v2 3D robot systems (Brockman et al., 2016) generated by the Mujoco physics engine (Todorov et al., 2012). (See the data-collection sketch after the table.) |
| Dataset Splits | No | The paper refers to a "training data set" in Appendices F.1–F.4 and describes how training data were generated or sampled (e.g., "sampled 1000 data points within the time interval [0,25] as the training data set"), but it does not give proportions or counts for training/validation/test splits, nor does it reference predefined standard splits that include a validation set. |
| Hardware Specification | Yes | The training time is about 2 hours on an RTX3090 GPU. |
| Software Dependencies | No | The paper mentions software components like "Mujoco physics engine" and optimization algorithms like "RMSprop optimizer", but it does not specify any software libraries or dependencies with version numbers required to replicate the experiment (e.g., Python version, PyTorch/TensorFlow version, specific solver versions). |
| Experiment Setup | Yes | The training epoch is 500, and the training batch size is 20 with a batch sequence time of 10. We use RMSprop optimizer with learning rate 1e-3. (See the training-loop sketch after the table.) |
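
The "Open Datasets" row cites trajectories of HalfCheetah-v2 and Walker2d-v2 generated with the Mujoco physics engine via OpenAI Gym. Below is a minimal collection sketch, assuming the legacy Gym API that the -v2 environments use; the quoted excerpt does not say which controller produced the trajectories, so a random policy stands in here as a placeholder.

```python
import gym
import numpy as np

def collect_trajectories(env_name, n_episodes=10, horizon=1000, seed=0):
    """Hypothetical trajectory collector; the paper does not specify the
    data-generating policy, so actions are sampled at random."""
    env = gym.make(env_name)  # e.g. "HalfCheetah-v2" or "Walker2d-v2"
    env.seed(seed)
    trajectories = []
    for _ in range(n_episodes):
        obs = env.reset()
        states = [obs]
        for _ in range(horizon):
            action = env.action_space.sample()  # random stand-in policy
            obs, reward, done, info = env.step(action)
            states.append(obs)
            if done:
                break
        trajectories.append(np.stack(states))
    env.close()
    return trajectories

cheetah_data = collect_trajectories("HalfCheetah-v2")
walker_data = collect_trajectories("Walker2d-v2")
```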
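
The "Experiment Setup" row pins down the optimizer and a few hyperparameters (500 epochs, batch size 20, batch sequence time 10, RMSprop at 1e-3). A minimal neural-ODE training-loop sketch under stated assumptions: torchdiffeq's `odeint` as the solver (the paper excerpt does not name one), "batch sequence time of 10" read as length-10 sub-sequences, and `ODEFunc`/`sample_batch` as hypothetical stand-ins for the authors' actual model and sampler. Only the quoted hyperparameters come from the paper.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # assumed solver backend; not named in the excerpt

class ODEFunc(nn.Module):
    """Placeholder dynamics f(t, x); the paper's actual architecture differs."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, 64), nn.Tanh(), nn.Linear(64, dim))

    def forward(self, t, x):
        return self.net(x)

def sample_batch(traj, t_all, batch_size, seq_len):
    """Hypothetical sampler: random length-seq_len sub-sequences of one trajectory."""
    starts = torch.randint(0, traj.shape[0] - seq_len, (batch_size,))
    x0 = traj[starts]                              # (batch, dim) initial states
    t = t_all[:seq_len] - t_all[0]                 # shared relative time grid
    x_true = torch.stack([traj[s:s + seq_len] for s in starts], dim=1)
    return x0, t, x_true                           # x_true: (seq, batch, dim)

def train(traj, t_all, epochs=500, batch_size=20, seq_len=10, lr=1e-3):
    # Quoted setup: 500 epochs, batch size 20, sequence time 10, RMSprop at 1e-3.
    func = ODEFunc(traj.shape[1])
    opt = torch.optim.RMSprop(func.parameters(), lr=lr)
    for epoch in range(epochs):
        x0, t, x_true = sample_batch(traj, t_all, batch_size, seq_len)
        x_pred = odeint(func, x0, t)               # integrate learned dynamics
        loss = torch.mean((x_pred - x_true) ** 2)  # generic MSE stand-in loss
        opt.zero_grad()
        loss.backward()
        opt.step()
    return func
```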