Latent Time Neural Ordinary Differential Equations

Authors: Srinivas Anumasa, P. K. Srijith

AAAI 2022, pp. 6010-6018 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We conduct experiments to evaluate the uncertainty and robustness modelling capabilities of the proposed approaches, LT-NODE and ALT-NODE, using synthetic and real-world data sets. The approaches are compared against standard NODE (Chen et al. 2018) and baselines which were recently proposed to model uncertainty in NODE models, such as NODE-GP (Anumasa and Srijith 2021a) and SDE-Net (Kong, Sun, and Zhang 2020).
Researcher Affiliation | Academia | Srinivas Anumasa, P.K. Srijith, Indian Institute of Technology Hyderabad, India. cs16resch11004@iith.ac.in, srijith@cse.iith.ac.in
Pseudocode | Yes | Algorithm 1: Forward pass in LT-NODE, computing predictive probability for datapoint x. (A hedged sketch of such a forward pass appears below the table.)
Open Source Code | Yes | https://github.com/srinivas-quan/LTNODE
Open Datasets | Yes | We demonstrate the superior uncertainty modelling capability of LT-NODE and ALT-NODE under different experimental setups on synthetic and several real-world image classification data sets such as CIFAR10 (Krizhevsky and Hinton 2009), SVHN (Netzer et al. 2011), MNIST (LeCun et al. 1998) and Fashion-MNIST (Xiao, Rasul, and Vollgraf 2017). (A dataset-loading sketch appears below the table.)
Dataset Splits | No | The paper mentions training data and test data, but does not explicitly state the percentages or counts for train/validation/test splits, nor does it refer to specific predefined splits.
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not specify particular software dependencies with version numbers, such as specific deep learning frameworks or libraries.
Experiment Setup | No | The paper states that 'All the models follow the same architecture as standard NODE. Additional networks are required for SDE-Net for diffusion and ALT-NODE for inference, both using 3 convolution layers followed by a fully connected layer.' and that an 'adaptive numerical technique such as Dopri5' is used. However, it does not provide specific hyperparameters such as the learning rate, batch size, or number of epochs, which are essential for reproducibility. (The stated architecture and solver details are sketched below.)
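
The Pseudocode row cites Algorithm 1, which computes the predictive probability for a datapoint by treating the ODE end time T as a latent variable and averaging class probabilities over end times sampled from its posterior. The sketch below is a minimal illustration of that procedure, assuming a Gaussian posterior over T, the torchdiffeq package, and hypothetical names (ODEFunc, classifier, lt_node_predict); it is an illustration under those assumptions, not the authors' implementation.

```python
# Hedged sketch of an LT-NODE-style forward pass: the ODE end time T is latent,
# and the predictive probability is a Monte Carlo average over sampled end times.
# The Gaussian posterior, module names, and dimensions are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchdiffeq import odeint  # pip install torchdiffeq


class ODEFunc(nn.Module):
    """ODE dynamics f(h, t) shared across all sampled end times."""
    def __init__(self, dim):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(dim, dim), nn.Tanh(), nn.Linear(dim, dim))

    def forward(self, t, h):
        return self.net(h)


def lt_node_predict(h0, func, classifier, q_mu, q_log_sigma, n_samples=10):
    """Monte Carlo estimate of p(y | x) for feature vectors h0 of shape (batch, dim)."""
    # Sample end times from an (assumed) Gaussian posterior q(T), kept positive.
    t_samples = F.softplus(q_mu + q_log_sigma.exp() * torch.randn(n_samples))
    # Solve the ODE once over the sorted grid of sampled times, starting at t = 0.
    t_grid = torch.cat([torch.zeros(1), torch.sort(t_samples).values])
    states = odeint(func, h0, t_grid, method='dopri5')  # (len(t_grid), batch, dim)
    # Average class probabilities over the states at the sampled end times.
    probs = torch.softmax(classifier(states[1:]), dim=-1)
    return probs.mean(dim=0)
```

At inference time a trained classifier head (e.g., nn.Linear(dim, n_classes)) and learned posterior parameters q_mu and q_log_sigma would be supplied; averaging over sampled end times is what yields the uncertainty estimates the paper evaluates.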
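
All four benchmarks listed in the Open Datasets row are publicly available. The paper does not state how they were obtained; the snippet below shows one common way to download them, assuming torchvision (which the paper does not name).

```python
# Hedged sketch: torchvision is an assumption; the paper does not name a data pipeline.
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()
root = 'data'
cifar10 = datasets.CIFAR10(root, train=True, download=True, transform=to_tensor)
svhn = datasets.SVHN(root, split='train', download=True, transform=to_tensor)
mnist = datasets.MNIST(root, train=True, download=True, transform=to_tensor)
fashion = datasets.FashionMNIST(root, train=True, download=True, transform=to_tensor)
```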
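
The Experiment Setup row quotes the only concrete implementation details the paper gives: an adaptive Dopri5 solver and auxiliary networks of 3 convolution layers followed by a fully connected layer. The sketch below makes those two pieces concrete, assuming torchdiffeq for the solver; the channel counts, kernel sizes, output dimension, and solver tolerances are assumptions, since the paper does not report them.

```python
# Sketch of the two stated details: an auxiliary network (3 conv layers + 1 FC layer,
# as used by SDE-Net for diffusion and ALT-NODE for inference) and a Dopri5 solve.
# Channel counts, kernel sizes, output dimension, and tolerances are assumptions.
import torch.nn as nn
from torchdiffeq import odeint


class AuxiliaryNet(nn.Module):
    """Three convolution layers followed by a fully connected layer."""
    def __init__(self, in_channels=3, out_dim=1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.fc = nn.Linear(64, out_dim)

    def forward(self, x):
        return self.fc(self.features(x))


def solve_dynamics(ode_func, h0, t_grid):
    """Adaptive-step Dormand-Prince (Dopri5) solve of the NODE dynamics."""
    return odeint(ode_func, h0, t_grid, method='dopri5', rtol=1e-3, atol=1e-3)
```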