ODE2VAE: Deep generative second order ODEs with Bayesian neural networks

Authors: Çağatay Yıldız, Markus Heinonen, Harri Lähdesmäki

NeurIPS 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate our approach on motion capture, image rotation and bouncing balls datasets. We achieve state-of-the-art performance in long term motion prediction and imputation tasks. (Section 4, Experiments)
Researcher Affiliation | Academia | Çağatay Yıldız1, Markus Heinonen1,2, Harri Lähdesmäki1, Department of Computer Science, Aalto University, Finland, FI-00076; {cagatay.yildiz, markus.o.heinonen, harri.lahdesmaki}@aalto.fi
Pseudocode | No | The paper describes the model and methods using text and mathematical equations, but it does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | An implementation of our experiments and generated video sequences are provided at https://github.com/cagatayyildiz/ODE2VAE.
Open Datasets | Yes | We illustrate the performance of our model on three different datasets: human motion capture (see the acknowledgements), rotating MNIST (Casale et al., 2018) and bouncing balls (Sutskever et al., 2009).
Dataset Splits | Yes | The first two-thirds of each sequence is reserved for training and validation, and the rest is used for testing. The second dataset consists of 23 walking sequences of subject 35 (Gan et al., 2015), which is partitioned into 16 training, 3 validation and 4 test sequences.
Hardware Specification | No | The paper mentions 'computer resources within the Aalto University School of Science Science-IT project' in the acknowledgements, but it does not provide specific hardware details such as GPU/CPU models or memory specifications used for the experiments.
Software Dependencies | No | The paper mentions TensorFlow (Abadi et al., 2016), the Adam optimizer (Kingma and Ba, 2014), and TensorFlow's own odeint_fixed function, but does not specify version numbers for these software components.
Experiment Setup | Yes | Encoder, differential function and decoder parameters are jointly optimized with the Adam optimizer (Kingma and Ba, 2014) with learning rate 0.001. Neural network hyperparameters, chosen by cross-validation, are detailed in the supplementary material.
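For context on the model being assessed: ODE2VAE's latent dynamics follow a second-order ODE, which is integrated after the standard rewrite into a coupled first-order system over position and velocity. A minimal NumPy sketch of that rewrite with a fixed-step Euler integrator (the acceleration function here is a toy stand-in, not the paper's Bayesian neural network, and the function names are illustrative only):

```python
import numpy as np

def integrate_second_order(s0, v0, f, dt=0.01, n_steps=100):
    """Integrate s'' = f(s, v) by rewriting it as the coupled
    first-order system s' = v, v' = f(s, v), with fixed-step Euler."""
    s = np.asarray(s0, dtype=float).copy()
    v = np.asarray(v0, dtype=float).copy()
    traj = [s.copy()]
    for _ in range(n_steps):
        a = f(s, v)           # acceleration from the dynamics function
        s = s + dt * v        # position update
        v = v + dt * a        # velocity update
        traj.append(s.copy())
    return np.stack(traj)     # shape: (n_steps + 1, dim)

# Toy harmonic oscillator s'' = -s starting at s=1, v=0;
# the exact solution is s(t) = cos(t).
traj = integrate_second_order([1.0], [0.0], lambda s, v: -s)
```

The paper integrates its latent system with TensorFlow's fixed-step solver rather than hand-rolled Euler; the sketch only illustrates the second-order-to-first-order reduction.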