Neural Flows: Efficient Alternative to Neural ODEs

Authors: Marin Biloš, Johanna Sommer, Syama Sundar Rangapuram, Tim Januschowski, Stephan Günnemann

NeurIPS 2021

Reproducibility assessment (variable, result, and supporting evidence from the LLM response):
Research Type: Experimental. Evidence: "Apart from computational efficiency, we also provide empirical evidence of favorable generalization performance via applications in time series modeling, forecasting, and density estimation." (Abstract) and "In this section we show that flow-based models can match or outperform ODEs at a smaller computation cost, both in latent variable time series modeling, as well as TPPs and time-dependent density estimation." (Section 4)
Researcher Affiliation: Collaboration. Marin Biloš (1), Johanna Sommer (1), Syama Sundar Rangapuram (2), Tim Januschowski (2), Stephan Günnemann (1); (1) Technical University of Munich, (2) AWS AI Labs, Germany
Pseudocode: No. The paper defines its models with mathematical equations but includes no structured pseudocode or labeled algorithm blocks.
Open Source Code: Yes. "All datasets are publicly available, we include the download links and release the code that reproduces the results." (Section 4) Code: https://www.daml.in.tum.de/neural-flows
Open Datasets: Yes. "All datasets are publicly available, we include the download links and release the code that reproduces the results." (Section 4). Datasets cited include Activity, Physionet, and MuJoCo (implicitly referencing [69], [73], [74]), MIMIC-III [35], MIMIC-IV [25, 36], Reddit, MOOC, and Wiki page edits [44], Bikes, Covid cases [77], and earthquake events [78].
Dataset Splits: Yes. "In all experiments we split the data into train, validation and test set; train with early stopping and report results on test set."
Hardware Specification: Yes. "For training we use two different machines, one with 3.4GHz processor and 32GB RAM and another with 61GB RAM and NVIDIA Tesla V100 GPU 16GB [52]."
Software Dependencies: No. The paper mentions the Adam optimizer [39] and solver types such as adaptive solvers [18] and Euler, but does not give version numbers for any software libraries or frameworks (e.g., PyTorch, TensorFlow, SciPy).
Experiment Setup: Yes. "We use Adam optimizer [39]." (Section 4) and "We use the same number of hidden layers and the same size of latent states for both the neural ODE, coupling flow and ResNet flow, giving approximately the same number of trainable parameters. ODE models use either Euler or adaptive solvers and we report the best results." (Section 4, Smoothing approach)
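The quoted setup contrasts ODE models, which must call a numerical solver such as Euler at every query time, with flow models, which evaluate the solution state directly. A minimal toy sketch of that trade-off, using a hand-picked linear vector field f(t, x) = -x whose flow is known in closed form; both functions are illustrative assumptions, not the paper's learned networks or released code:

```python
import numpy as np

def f(t, x):
    # Toy vector field standing in for a learned ODE dynamics network.
    return -x

def euler_solve(f, x0, t1, n_steps=20):
    """Fixed-step Euler integration: costs n_steps evaluations of f."""
    x, t = x0, 0.0
    h = t1 / n_steps
    for _ in range(n_steps):
        x = x + h * f(t, x)
        t += h
    return x

def flow(t, x0):
    """Closed-form flow of dx/dt = -x: one evaluation, no solver loop."""
    return x0 * np.exp(-t)

x0 = np.array([1.0])
print(euler_solve(f, x0, 1.0))  # Euler approximation after 20 steps
print(flow(1.0, x0))            # direct evaluation, exp(-1) ~ 0.368
```

The point of the comparison is that the flow gives the state at an arbitrary time in a single forward pass, while the ODE's cost scales with the number of solver steps; this is the "smaller computation cost" claim the assessment quotes.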