Learning Differential Equations that are Easy to Solve

Authors: Jacob Kelly, Jesse Bettencourt, Matthew J. Johnson, David K. Duvenaud

NeurIPS 2020

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We demonstrate our approach by training substantially faster, while nearly as accurate, models in supervised classification, density estimation, and time-series modelling tasks." |
| Researcher Affiliation | Collaboration | Jacob Kelly (University of Toronto, Vector Institute) jkelly@cs.toronto.edu; Jesse Bettencourt (University of Toronto, Vector Institute) jessebett@cs.toronto.edu; Matthew James Johnson (Google Brain) mattjj@google.com; David Duvenaud (University of Toronto, Vector Institute) duvenaud@cs.toronto.edu |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code available at github.com/jacobjinkelly/easy-neural-ode |
| Open Datasets | Yes | "We construct a model for MNIST classification: it takes in as input a flattened MNIST image..." The paper also uses the PhysioNet Challenge 2012 dataset (Silva et al., 2012), the MINIBOONE tabular dataset from Papamakarios et al. (2017), and the MNIST image dataset (LeCun et al., 2010). |
| Dataset Splits | No | The paper mentions a 'training error' and a 'test set' but does not explicitly describe training/validation/test splits, percentages, or sample counts needed for reproduction. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU or CPU models, memory) used to run the experiments. |
| Software Dependencies | No | The paper mentions the JAX Python library and the standard dopri5 Runge-Kutta 4(5) solver, but does not provide version numbers for these software components. |
| Experiment Setup | Yes | "During training, we weigh this regularization term by a hyperparameter λ and add it to our original loss to get our regularized objective... The default tolerance of 1.4e-8 for both atol and rtol behaved well in all our experiments." (See the sketch below the table for how these pieces fit together.) |
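To make the "Software Dependencies" and "Experiment Setup" rows concrete, here is a minimal sketch of the solver and objective configuration the quotes describe, assuming JAX's `jax.experimental.ode.odeint` (an adaptive dopri5 solver, matching the paper's stated solver and whose defaults match the quoted 1.4e-8 tolerances). The `dynamics` function, `params` pytree, and `lam` value are hypothetical placeholders, and the squared-dynamics penalty is a stand-in regularizer for illustration only; the paper's actual regularizer penalizes higher-order Taylor coefficients of the dynamics computed with Taylor-mode automatic differentiation.

```python
import jax.numpy as jnp
from jax import grad
from jax.experimental.ode import odeint  # adaptive dopri5 solver


def dynamics(y, t, params):
    # Toy stand-in for the paper's learned dynamics; the real models
    # parameterize this function with a neural network.
    return jnp.tanh(params["w"] * y + params["b"])


def solve(params, y0, ts):
    # The paper reports that the default tolerance of 1.4e-8 for both
    # atol and rtol behaved well in all experiments; these are also
    # odeint's defaults, written out here for clarity.
    return odeint(dynamics, y0, ts, params, rtol=1.4e-8, atol=1.4e-8)


def regularized_loss(params, y0, ts, target, lam=1e-2):
    ys = solve(params, y0, ts)
    task_loss = jnp.mean((ys[-1] - target) ** 2)
    # Hypothetical stand-in regularizer: a squared-dynamics penalty used
    # only to illustrate the lambda-weighted form "loss + lam * reg";
    # the paper instead penalizes K-th-order Taylor coefficients of the
    # dynamics along the solution trajectory.
    reg = jnp.mean(dynamics(ys, ts[-1], params) ** 2)
    return task_loss + lam * reg


# Usage: differentiate the regularized objective w.r.t. the parameters.
params = {"w": 0.5, "b": 0.1}
y0 = jnp.ones(3)
ts = jnp.linspace(0.0, 1.0, 2)
grads = grad(regularized_loss)(params, y0, ts, jnp.zeros(3))
```

Under this reading, reproducing the setup mainly requires choosing λ (reported as a tuned hyperparameter) and keeping the solver tolerances at their defaults; the missing software versions noted in the table remain the principal gap.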