Augmented Neural ODEs

Authors: Emilien Dupont, Arnaud Doucet, Yee Whye Teh

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We first compare the performance of ResNets and NODEs on simple regression tasks. To provide a baseline, we not only train on g(x) but also on data which can be made linearly separable without altering the topology of the space (implying that Neural ODEs should be able to easily learn this function). To ensure a fair comparison, we run large hyperparameter searches for each model and repeat each experiment 20 times to ensure results are meaningful across initializations (see appendix for details). We show results for experiments with d = 1 and d = 2 in Fig. 5."
Researcher Affiliation | Academia | Emilien Dupont, University of Oxford (dupont@stats.ox.ac.uk); Arnaud Doucet, University of Oxford (doucet@stats.ox.ac.uk); Yee Whye Teh, University of Oxford (y.w.teh@stats.ox.ac.uk)
Pseudocode | No | No pseudocode or algorithm blocks found.
Open Source Code | Yes | "The code to reproduce all experiments in this paper is available at https://github.com/EmilienDupont/augmented-neural-odes."
Open Datasets | Yes | "We test this behavior on image data by training models on MNIST, CIFAR10, SVHN and 200 classes of 64×64 ImageNet."
Dataset Splits | No | "We train both NODEs and ANODEs on the training set and plot the evolution of the validation loss during training in Fig. 9."
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models or cloud instance types) are provided for running experiments.
Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the provided text.
Experiment Setup | No | The paper mentions "large hyperparameter searches" and states that "Full training and architecture details can be found in the appendix", but these specifics are not provided in the main body of the paper.
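For context on the paper's core idea, a minimal sketch of the "augmentation" trick behind Augmented Neural ODEs: the input is concatenated with extra zero-valued dimensions before the ODE is solved, giving trajectories room to avoid crossing in the higher-dimensional space. This is a pure-Python illustration, not the authors' implementation; the function names (`augment`, `euler_odeint`, `dynamics`) and the toy linear dynamics are assumptions chosen for clarity, standing in for a learned network and an adaptive solver.

```python
def augment(x, p):
    """Append p zero-valued channels to the state vector (the ANODE idea)."""
    return list(x) + [0.0] * p

def euler_odeint(f, h0, t0, t1, steps=100):
    """Integrate dh/dt = f(h, t) from t0 to t1 with fixed-step forward Euler.
    (A stand-in for the adaptive solvers used in practice.)"""
    h = list(h0)
    dt = (t1 - t0) / steps
    t = t0
    for _ in range(steps):
        dh = f(h, t)
        h = [hi + dt * di for hi, di in zip(h, dh)]
        t += dt
    return h

def dynamics(h, t):
    # Toy linear decay standing in for a learned network f(h, t).
    return [-0.5 * hi for hi in h]

x = [1.0, 2.0]            # original 2-d input
h0 = augment(x, p=3)      # augmented 5-d initial state
hT = euler_odeint(dynamics, h0, 0.0, 1.0)
print(len(h0), len(hT))   # state stays 5-d throughout: prints "5 5"
```

In the paper the augmented channels are initialized to zero exactly as above, and a downstream linear layer maps the final augmented state to the prediction; here the zero channels simply remain zero because the toy dynamics scale each channel independently.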