Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Neural ODE Processes

Authors: Alexander Norcliffe, Cristian Bodnar, Ben Day, Jacob Moss, Pietro Liò

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To test the proposed advantages of NDPs we carried out various experiments on time series data. For the low-dimensional experiments in Sections 4.1 and 4.2, we use an MLP architecture for the encoder and decoder. For the high-dimensional experiments in Section 4.3, we use a convolutional architecture for both. We train the models using RMSprop (Tieleman & Hinton, 2012) with learning rate 1 × 10⁻³. Additional model and task details can be found in Appendices F and G, respectively.
Researcher Affiliation | Academia | Alexander Norcliffe, Department of Computer Science, University College London, London, United Kingdom, EMAIL; Cristian Bodnar, Ben Day, Jacob Moss & Pietro Liò, Department of Computer Science, University of Cambridge, Cambridge, United Kingdom, EMAIL
Pseudocode | Yes | Algorithm 1: Learning and Inference in Neural ODE Processes
Open Source Code | Yes | Our code and datasets are available at https://github.com/crisbodnar/ndp.
Open Datasets | Yes | Our code and datasets are available at https://github.com/crisbodnar/ndp. ... To generate the distribution over functions, we sample these parameters from a uniform distribution over their respective ranges. We use 490 time-series for training and evaluate on 10 separate test time-series. Each series contains 100 points.
Dataset Splits | Yes | Overall, we generate a dataset with 1,000 training time-series, 100 validation time-series and 200 test time-series, each using disjoint combinations of different calligraphic styles and dynamics.
Hardware Specification | Yes | The experiments were run on an Nvidia Titan XP.
Software Dependencies | No | The paper mentions the torchdiffeq library and PyTorch but does not provide specific version numbers for software dependencies.
Experiment Setup | Yes | We train the models using RMSprop (Tieleman & Hinton, 2012) with learning rate 1 × 10⁻³. Additional model and task details can be found in Appendices F and G, respectively.
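The "Research Type" and "Experiment Setup" rows quote an MLP encoder/decoder for the low-dimensional tasks trained with RMSprop at learning rate 1 × 10⁻³. Below is a minimal PyTorch sketch of that optimizer configuration; the module shapes, hidden widths, and latent size are illustrative assumptions rather than the authors' settings, which are given in the linked repository and Appendix F.

```python
import torch
import torch.nn as nn

# Illustrative encoder/decoder stand-ins; the hidden widths and latent size
# are assumptions, not the architecture from Appendix F.
encoder = nn.Sequential(nn.Linear(2, 128), nn.ReLU(), nn.Linear(128, 64))
decoder = nn.Sequential(nn.Linear(64 + 1, 128), nn.ReLU(), nn.Linear(128, 2))

# Quoted setting: RMSprop (Tieleman & Hinton, 2012) with learning rate 1e-3.
params = list(encoder.parameters()) + list(decoder.parameters())
optimizer = torch.optim.RMSprop(params, lr=1e-3)
```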
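The "Pseudocode" row points to Algorithm 1 (learning and inference in Neural ODE Processes), and the "Software Dependencies" row notes that the paper uses PyTorch and the torchdiffeq library without stating versions. The sketch below only illustrates the general shape of such a model: encode context points into an initial latent state, evolve it with an ODE solver, and decode predictions at the target times. All module definitions and dimensions are simplifications, and the deterministic read-out is an assumption; this is not the authors' Algorithm 1, which treats the latent state probabilistically and trains with a neural-process-style variational objective (see the repository above for the actual implementation).

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint  # ODE solver library named in the paper (version unspecified)

class LatentODEFunc(nn.Module):
    """Parameterises dL/dt for the latent state; sizes are illustrative."""
    def __init__(self, latent_dim=16):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(latent_dim, 64), nn.Tanh(),
                                 nn.Linear(64, latent_dim))

    def forward(self, t, l):
        return self.net(l)

class NDPSketch(nn.Module):
    """Simplified Neural-ODE-Process-style model: encode context -> evolve latent
    with an ODE -> decode at target times. Not the authors' architecture."""
    def __init__(self, x_dim=1, y_dim=1, latent_dim=16):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(x_dim + y_dim, 64), nn.ReLU(),
                                     nn.Linear(64, latent_dim))
        self.ode_func = LatentODEFunc(latent_dim)
        self.decoder = nn.Sequential(nn.Linear(latent_dim, 64), nn.ReLU(),
                                     nn.Linear(64, y_dim))

    def forward(self, x_context, y_context, t_target):
        # Aggregate context points into an initial latent state l(t0).
        l0 = self.encoder(torch.cat([x_context, y_context], dim=-1)).mean(dim=0)
        # Evolve the latent state over the target times with the ODE solver.
        l_t = odeint(self.ode_func, l0, t_target)   # shape: (T, latent_dim)
        # Decode a prediction at every target time.
        return self.decoder(l_t)                    # shape: (T, y_dim)

# Toy usage with 10 context points and 100 target times.
model = NDPSketch()
x_c, y_c = torch.rand(10, 1), torch.rand(10, 1)
t = torch.linspace(0., 1., 100)
y_pred = model(x_c, y_c, t)
```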
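The "Open Datasets" and "Dataset Splits" rows quote two splits: 490 training / 10 test series of 100 points each for the low-dimensional task, and 1,000 / 100 / 200 train/validation/test series for the task with disjoint style and dynamics combinations. A minimal sketch of slicing generated series into the latter split is below; the generator is a placeholder and the disjoint-combination constraint from the quote is not reproduced.

```python
import torch

def make_series(n_series, n_points=100):
    """Placeholder generator: stands in for the paper's sampled time-series."""
    t = torch.linspace(0., 5., n_points)
    return torch.stack([torch.sin(torch.rand(1) * t + torch.rand(1))
                        for _ in range(n_series)])

# Quoted split sizes: 1,000 train / 100 validation / 200 test time-series.
n_train, n_val, n_test = 1000, 100, 200
series = make_series(n_train + n_val + n_test)

train = series[:n_train]
val = series[n_train:n_train + n_val]
test = series[n_train + n_val:]
assert train.shape[0] == 1000 and val.shape[0] == 100 and test.shape[0] == 200
```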
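Because the "Software Dependencies" row flags that no version numbers are given, a reproduction should record the environment it actually used. A small sketch, assuming both packages are installed and expose a __version__ attribute:

```python
import torch
import torchdiffeq

# Record the versions actually used, since the paper does not pin any.
print("torch", torch.__version__)
print("torchdiffeq", torchdiffeq.__version__)
```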