Neural ODE Processes

Authors: Alexander Norcliffe, Cristian Bodnar, Ben Day, Jacob Moss, Pietro Liò

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To test the proposed advantages of NDPs we carried out various experiments on time series data. For the low-dimensional experiments in Sections 4.1 and 4.2, we use an MLP architecture for the encoder and decoder. For the high-dimensional experiments in Section 4.3, we use a convolutional architecture for both. We train the models using RMSprop (Tieleman & Hinton, 2012) with learning rate 1 × 10⁻³. Additional model and task details can be found in Appendices F and G, respectively.
Researcher Affiliation | Academia | Alexander Norcliffe, Department of Computer Science, University College London, London, United Kingdom, ucabino@ucl.ac.uk; Cristian Bodnar, Ben Day, Jacob Moss & Pietro Liò, Department of Computer Science, University of Cambridge, Cambridge, United Kingdom, {cb2015, bjd39, jm2311, pl219}@cam.ac.uk
Pseudocode | Yes | Algorithm 1: Learning and Inference in Neural ODE Processes
Open Source Code | Yes | Our code and datasets are available at https://github.com/crisbodnar/ndp.
Open Datasets | Yes | Our code and datasets are available at https://github.com/crisbodnar/ndp. ... To generate the distribution over functions, we sample these parameters from a uniform distribution over their respective ranges. We use 490 time-series for training and evaluate on 10 separate test time-series. Each series contains 100 points.
Dataset Splits | Yes | Overall, we generate a dataset with 1,000 training time-series, 100 validation time-series and 200 test time-series, each using disjoint combinations of different calligraphic styles and dynamics.
Hardware Specification | Yes | The experiments were run on an Nvidia Titan XP.
Software Dependencies | No | The paper mentions the torchdiffeq library and PyTorch but does not provide specific version numbers for its software dependencies.
Experiment Setup | Yes | We train the models using RMSprop (Tieleman & Hinton, 2012) with learning rate 1 × 10⁻³. Additional model and task details can be found in Appendices F and G, respectively.
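
The Pseudocode and Experiment Setup rows above name Algorithm 1 (learning and inference in NDPs), MLP encoders/decoders, and RMSprop at learning rate 1 × 10⁻³. The sketch below is a minimal, simplified illustration of that pipeline using torchdiffeq: the class name `NDPSketch`, the hidden sizes, the latent dimension, and the standard-normal KL term are assumptions rather than the authors' implementation, and the paper's full ELBO and latent-state factorisation are not reproduced here.

```python
# Minimal NDP-style sketch (illustrative only; sizes and the simplified ELBO are assumptions).
import torch
import torch.nn as nn
from torchdiffeq import odeint


class NDPSketch(nn.Module):
    def __init__(self, x_dim=1, y_dim=1, r_dim=64, z_dim=32, h_dim=64):
        super().__init__()
        # MLP encoder: each context pair (x_i, y_i) -> representation r_i.
        self.encoder = nn.Sequential(
            nn.Linear(x_dim + y_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, r_dim))
        # Aggregated representation -> parameters of q(z | context).
        self.r_to_mu = nn.Linear(r_dim, z_dim)
        self.r_to_logvar = nn.Linear(r_dim, z_dim)
        # ODE dynamics on the latent state (here conditioned only on the state itself).
        self.dynamics = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.Tanh(), nn.Linear(h_dim, z_dim))
        # MLP decoder: evolved latent state plus target time -> prediction.
        self.decoder = nn.Sequential(
            nn.Linear(z_dim + x_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, y_dim))

    def forward(self, x_ctx, y_ctx, t_target):
        # Permutation-invariant aggregation of the context set.
        r = self.encoder(torch.cat([x_ctx, y_ctx], dim=-1)).mean(dim=0)
        mu, logvar = self.r_to_mu(r), self.r_to_logvar(r)
        z0 = mu + torch.randn_like(mu) * (0.5 * logvar).exp()    # reparameterised sample
        # Evolve the latent state over the target times with a black-box ODE solver.
        zt = odeint(lambda t, z: self.dynamics(z), z0, t_target)  # (T, z_dim)
        pred = self.decoder(torch.cat([zt, t_target.unsqueeze(-1)], dim=-1))
        return pred, mu, logvar


# Toy data (shapes only) and one optimisation step with the reported optimiser:
# RMSprop at learning rate 1e-3.
t_target = torch.linspace(0.0, 1.0, 10)
x_ctx = t_target[:5].unsqueeze(-1)
y_ctx = torch.sin(x_ctx)
y_target = torch.sin(t_target).unsqueeze(-1)

model = NDPSketch()
optimiser = torch.optim.RMSprop(model.parameters(), lr=1e-3)

pred, mu, logvar = model(x_ctx, y_ctx, t_target)
nll = ((pred - y_target) ** 2).mean()                        # stand-in likelihood term
kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum()    # KL to a standard normal
loss = nll + kl
loss.backward()
optimiser.step()
optimiser.zero_grad()
```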
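The Open Datasets and Dataset Splits rows quote the sampling procedure (per-series parameters drawn uniformly over their ranges) and the split sizes (490 training and 10 test series of 100 points for the low-dimensional tasks; 1,000/100/200 for the handwriting dataset). A minimal sketch of generating and splitting such a sine dataset follows; the parameter names and uniform ranges are illustrative assumptions, not the paper's exact values.

```python
# Illustrative sine-series generation with uniformly sampled parameters
# and the reported 490/10 train/test split.
import torch


def make_sine_series(n_series, n_points=100, seed=0):
    torch.manual_seed(seed)
    t = torch.linspace(0.0, 5.0, n_points)
    amplitude = torch.empty(n_series, 1).uniform_(-1.0, 1.0)  # assumed range
    shift = torch.empty(n_series, 1).uniform_(-0.5, 0.5)      # assumed range
    y = amplitude * torch.sin(t.unsqueeze(0) + shift)         # (n_series, n_points)
    return t, y


t, y_train = make_sine_series(n_series=490, seed=0)  # 490 training series
_, y_test = make_sine_series(n_series=10, seed=1)    # 10 held-out test series
```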
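Since the Software Dependencies row notes that PyTorch and torchdiffeq are named without version numbers, a generic snippet like the one below can record the installed versions when re-running the experiments. It assumes both packages are installed via pip and uses only the standard library.

```python
# Record installed versions of the dependencies named in the paper.
from importlib.metadata import PackageNotFoundError, version

for pkg in ("torch", "torchdiffeq"):
    try:
        print(f"{pkg}=={version(pkg)}")
    except PackageNotFoundError:
        print(f"{pkg}: not installed")
```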