Learning Neural Event Functions for Ordinary Differential Equations

Authors: Ricky T. Q. Chen, Brandon Amos, Maximilian Nickel

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We test our approach in modeling hybrid discrete and continuous systems such as switching dynamical systems and collision in multi-body systems, and we propose simulation-based training of point processes with applications in discrete control. ... Table 1: Continuous-time switching linear dynamical system. ... Table 2: Test results on the physics prediction task.
Researcher Affiliation | Collaboration | Ricky T. Q. Chen, University of Toronto; Vector Institute (rtqichen@cs.toronto.edu); Brandon Amos, Maximilian Nickel, Facebook AI Research ({bda,maxn}@fb.com)
Pseudocode | Yes | Algorithm 1: The Neural Event ODE. In addition to an ODE, we also model a variable number of possible event locations and how each event affects the system.
Open Source Code | Yes | We implemented our method in the torchdiffeq (Chen, 2018) library written in the PyTorch (Paszke et al., 2019a) framework, allowing us to make use of GPU-enabled ODE solvers.
Open Datasets | Yes | We constructed a fan-shaped system, similar to a baseball track. ... We created a data set with 100 short trajectories for training and 25 longer trajectories as validation and test sets. This was done by sampling a trajectory from the ground-truth system and adding random noise. ... We used the Pymunk/Chipmunk (Blomqvist, 2011; Lembcke, 2007) library to simulate two balls of radius 0.5 in a [0, 5]^2 box. ... The initial position is randomly sampled and the initial velocity is zero. We then simulated for 100 steps. We sampled 1000 initial positions for training and 25 initial positions each for validation and test.
Dataset Splits | Yes | We created a data set with 100 short trajectories for training and 25 longer trajectories as validation and test sets. ... We sampled 1000 initial positions for training and 25 initial positions each for validation and test.
Hardware Specification | No | The paper does not specify the hardware used for running experiments beyond mentioning "GPU-enabled ODE solvers".
Software Dependencies | Yes | ACKNOWLEDGEMENTS ... PyTorch (Paszke et al., 2019b), torchdiffeq (Chen, 2018), higher (Grefenstette et al., 2019), Hydra (Yadan, 2019), Jupyter (Kluyver et al., 2016), Matplotlib (Hunter, 2007), seaborn (Waskom et al., 2018), numpy (Oliphant, 2006; Van Der Walt et al., 2011), pandas (McKinney, 2012), and SciPy (Jones et al., 2014). ... We used the Pymunk/Chipmunk (Blomqvist, 2011; Lembcke, 2007) library to simulate two balls of radius 0.5 in a [0, 5]^2 box.
Experiment Setup | Yes | We used Adam with the default learning rate of 0.001 and a cosine learning rate decay. All models were trained with a batch size of 1 for 25000 iterations. ... We used relative and absolute tolerances of 1e-8 for solving the ODE. ... We used a mean squared loss on the position of the two balls. For optimization, we used Adam with learning rate 0.0005 for the event function and 0.0001 for the instantaneous update. We also clipped gradient norms at 5.0. All models were trained for 1000000 iterations, where each iteration used a subsequence of 25 steps as the target. ... All models were trained using Adam with learning rate 0.0003.
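The Neural Event ODE of Algorithm 1 pairs an ODE solve with an event function, whose zero-crossing halts integration, and an instantaneous update that modifies the state before the solve resumes. The paper implements this with learned components in torchdiffeq; as a minimal non-learned sketch of the same mechanism, the following uses SciPy's solve_ivp event interface as a stand-in, with a hypothetical bouncing-ball system rather than the paper's two-ball setup:

```python
import numpy as np
from scipy.integrate import solve_ivp

# State y = [height, velocity] of a ball falling under gravity.
def dynamics(t, y):
    return [y[1], -9.81]

# Event function: its zero-crossing marks the "collision" event.
def hit_ground(t, y):
    return y[0]
hit_ground.terminal = True   # stop the solve when the event fires
hit_ground.direction = -1    # trigger only on downward crossings

state = np.array([10.0, 0.0])   # drop from 10 m at rest
segments = []
for bounce in range(3):
    sol = solve_ivp(dynamics, (0.0, 10.0), state,
                    events=hit_ground, rtol=1e-8, atol=1e-8)
    segments.append(sol)
    # Instantaneous update at the event (the paper's "how each event
    # affects the system"): reflect and damp the velocity.
    y_event = sol.y_events[0][0]
    state = np.array([0.0, -0.8 * y_event[1]])

# First impact of a 10 m drop occurs at sqrt(2 * 10 / 9.81) ≈ 1.43 s.
print(segments[0].t_events[0][0])
```

In the paper's version, both the event function and the instantaneous update are neural networks, and gradients flow through the event time itself, which is what enables training on tasks such as the two-ball collision prediction described above.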