Latent Neural ODEs with Sparse Bayesian Multiple Shooting
Authors: Valerii Iakovlev, Cagatay Yildiz, Markus Heinonen, Harri Lähdesmäki
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate efficient and stable training, and state-of-the-art performance on multiple large-scale benchmark datasets. |
| Researcher Affiliation | Academia | Aalto University, Finland. University of Tübingen, Germany. |
| Pseudocode | No | The paper describes computational steps (e.g., in Section C) and data generation procedures (e.g., in Appendix D) as numbered lists within paragraphs, but it does not present them as formal pseudocode blocks or clearly labeled "Algorithm" sections. |
| Open Source Code | Yes | Code: https://github.com/yakovlev31/msvi |
| Open Datasets | Yes | The datasets and data generation scripts can be downloaded at https://github.com/yakovlev31/msvi. |
| Dataset Splits | Yes | The training/validation/test sets contain 400/50/50 trajectories. |
| Hardware Specification | Yes | Training is done on a single NVIDIA Tesla V100 GPU. |
| Software Dependencies | Yes | To simulate the dynamics we use an ODE solver from torchdiffeq package (Chen et al., 2018) (dopri5 with rtol = atol = 10⁻⁵). ...see PyTorch 1.12 (Paszke et al., 2019) documentation for details. |
| Experiment Setup | Yes | We train our model for 300000 iterations using Adam optimizer (Kingma & Ba, 2015) and learning rate exponentially decreasing from 3·10⁻⁴ to 10⁻⁵. For PENDULUM, RMNIST, and BOUNCING BALLS datasets the batch size is set to 16, 16, and 64, respectively, while the block size is set to 1, 1, and 5, respectively. |
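The reported schedule (learning rate decaying exponentially from 3·10⁻⁴ to 10⁻⁵ over 300 000 iterations) pins down a fixed per-step decay factor. A minimal sketch, assuming the decay is applied once per iteration (the quoted excerpt does not state the application frequency):

```python
# Exponential learning-rate decay from 3e-4 to 1e-5 over 300_000
# iterations; per-iteration application is an assumption, not stated
# in the paper's quoted setup.
LR_START, LR_END, NUM_ITERS = 3e-4, 1e-5, 300_000

# Constant multiplicative factor so that LR_START * gamma**NUM_ITERS == LR_END.
gamma = (LR_END / LR_START) ** (1.0 / NUM_ITERS)

def lr_at(step: int) -> float:
    """Learning rate after `step` iterations of per-step exponential decay."""
    return LR_START * gamma ** step

print(f"{lr_at(0):.1e}")          # initial learning rate
print(f"{lr_at(NUM_ITERS):.1e}")  # final learning rate
```

In PyTorch this corresponds to wrapping Adam with `torch.optim.lr_scheduler.ExponentialLR(optimizer, gamma=gamma)` and calling `scheduler.step()` each iteration.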