On Numerical Integration in Neural Ordinary Differential Equations

Authors: Aiqing Zhu, Pengzhan Jin, Beibei Zhu, Yifa Tang

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Several experiments are performed to numerically verify our theoretical analysis." and "Experimental results support the theoretical analysis."
Researcher Affiliation | Academia | (1) LSEC, ICMSEC, Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190, China; (2) School of Mathematical Sciences, University of Chinese Academy of Sciences, Beijing 100049, China; (3) School of Mathematical Sciences, Peking University, Beijing 100871, China; (4) School of Mathematics and Physics, University of Science and Technology Beijing, Beijing 100083, China.
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | Yes | "The code accompanying this paper is publicly available at https://github.com/Aiqing-Zhu/IMDE."
Open Datasets | No | "Here, the training data is generated by a known system, $\mathcal{T} = \{(x_n, \phi_{T,f}(x_n))\}_{n=1}^{N}$, and we can calculate the corresponding IMDE." and "The training dataset consists of grouped pairs of points with shared data step $T$, i.e., $\mathcal{T} = \{(x_n, \phi_T(x_n))\}_{n=1}^{N}$." No link or citation to a public dataset is provided; the authors state only that the data are generated from benchmark problems (a data-generation sketch follows the table).
Dataset Splits | No | "The training dataset consists of grouped pairs of points with shared data step $T$, i.e., $\mathcal{T} = \{(x_n, \phi_T(x_n))\}_{n=1}^{N}$." and "On all experiments, the neural networks employed in Neural ODE are all fully connected networks with two hidden layers, each layer having 128 hidden units. The activation function is chosen to be tanh. We optimize the mean-squared-error loss ... for $3 \times 10^5$ epochs with Adam optimization." No explicit train/validation/test splits are specified.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory) are mentioned for running the experiments.
Software Dependencies | No | The paper mentions 'Adam optimization' but does not provide version numbers for any software libraries, frameworks, or programming languages used.
Experiment Setup | Yes | "On all experiments, the neural networks employed in Neural ODE are all fully connected networks with two hidden layers, each layer having 128 hidden units. The activation function is chosen to be tanh. We optimize the mean-squared-error loss ... for $3 \times 10^5$ epochs with Adam optimization (Kingma & Ba, 2015), where the learning rate is set to decay exponentially with linearly decreasing powers from $10^{-2}$ to $10^{-5}$." A sketch of this setup appears after the table.
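The dataset rows above describe training pairs $(x_n, \phi_T(x_n))$ obtained by flowing a known vector field over one data step $T$. The following is a minimal sketch of how such pairs could be generated, assuming SciPy's solve_ivp as a high-accuracy reference integrator; the pendulum vector field, the sampling box, and the step size are illustrative placeholders, not values taken from the paper's code.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(t, x):
    # Example vector field (an unforced pendulum); a stand-in for the
    # paper's benchmark systems.
    q, p = x
    return [p, -np.sin(q)]

def generate_pairs(n_samples, T, rng):
    # Build the training set {(x_n, phi_T(x_n))}_{n=1}^{N}: sample initial
    # points and integrate f over one data step T with tight tolerances.
    x0 = rng.uniform(-2.0, 2.0, size=(n_samples, 2))
    x1 = np.empty_like(x0)
    for i, x in enumerate(x0):
        sol = solve_ivp(f, (0.0, T), x, rtol=1e-12, atol=1e-12)
        x1[i] = sol.y[:, -1]
    return x0, x1

x0, x1 = generate_pairs(n_samples=1000, T=0.1, rng=np.random.default_rng(0))
```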
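The experiment-setup row reports the architecture and optimization details verbatim. Below is a minimal PyTorch sketch of that reported configuration: two hidden layers of 128 tanh units, a mean-squared-error loss, and Adam for $3 \times 10^5$ epochs with a learning rate $10^p$ whose exponent $p$ decreases linearly from $-2$ to $-5$. The single forward-Euler step used as the discrete flow and the placeholder data are assumptions for illustration; the paper studies several numerical integrators, and the authors' actual training loop lives in the linked repository.

```python
import torch
import torch.nn as nn

dim, T, epochs = 2, 0.1, 300_000

# Fully connected network with two hidden layers of 128 tanh units,
# as described in the quoted setup.
f_net = nn.Sequential(
    nn.Linear(dim, 128), nn.Tanh(),
    nn.Linear(128, 128), nn.Tanh(),
    nn.Linear(128, dim),
)

optimizer = torch.optim.Adam(f_net.parameters(), lr=1e-2)

# Placeholder training pairs; in practice these come from a generated
# dataset such as the one sketched above.
x0 = torch.randn(1000, dim)
x1 = torch.randn(1000, dim)

for epoch in range(epochs):
    # Exponential learning-rate decay: lr = 10^p with the exponent p
    # decreasing linearly from -2 to -5, matching the quoted schedule.
    p = -2.0 - 3.0 * epoch / (epochs - 1)
    for group in optimizer.param_groups:
        group["lr"] = 10.0 ** p

    optimizer.zero_grad()
    pred = x0 + T * f_net(x0)  # one forward-Euler step as the discrete flow
    loss = nn.functional.mse_loss(pred, x1)
    loss.backward()
    optimizer.step()
```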