Compositional Modeling of Nonlinear Dynamical Systems with ODE-based Random Features

Authors: Thomas McDonald, Mauricio Álvarez

NeurIPS 2021

Reproducibility assessment. Each entry below lists a reproducibility variable, its assessed result, and the supporting excerpt from the paper.
Research Type: Experimental. Excerpt: "We provide evidence that our model is capable of capturing highly nonlinear behaviour in real-world multivariate time series data. In addition, we find that our approach achieves comparable performance to a number of other probabilistic models on benchmark regression tasks." (Section 5, Experiments)
Researcher Affiliation: Academia. Excerpt: "Thomas M. McDonald, Department of Computer Science, University of Sheffield (tmmcdonald1@sheffield.ac.uk); Mauricio A. Álvarez, Department of Computer Science, University of Sheffield (mauricio.alvarez@sheffield.ac.uk)"
Pseudocode: No. The paper does not contain any pseudocode or algorithm blocks.
Open Source Code: Yes. Excerpt: "Our code is publicly available in the repository: https://github.com/tomcdonald/Deep-LFM. The code was also included within the supplemental material at the time of review."
Open Datasets: Yes. Excerpt: "We evaluate its performance on a subset of the CHARIS dataset (ODC-BY 1.0 License) [Kim et al., 2016], which can be found on the PhysioNet data repository [Goldberger et al., 2000]. Finally, we also evaluated the performance of the model on two regression datasets from the UCI Machine Learning Repository [Dua and Graff, 2017], Powerplant and Protein."
Dataset Splits: Yes. Excerpts: "we focus here on the more challenging task of extrapolating beyond the training input-space by training the aforementioned models on the first 700 observations and withholding the remaining 300 as a test set" and "Progression of validation set metrics on the UCI benchmarks, averaged over three folds."
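The extrapolation split described above is chronological rather than random: the first 700 observations form the training set and the final 300 the test set. A minimal sketch, using synthetic data in place of the CHARIS time series (the signal and values here are illustrative only):

```python
import numpy as np

# Illustrative stand-in for one channel of a 1000-point time series;
# the actual CHARIS data would be loaded from PhysioNet instead.
rng = np.random.default_rng(0)
t = np.linspace(0.0, 10.0, 1000)
y = np.sin(t) + 0.1 * rng.normal(size=t.shape)

# Chronological split matching the paper's extrapolation setup:
# train on the first 700 observations, hold out the remaining 300.
t_train, t_test = t[:700], t[700:]
y_train, y_test = y[:700], y[700:]
```

Because the split is ordered in time, every test input lies beyond the training input-space, which is what makes the task an extrapolation benchmark rather than interpolation.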
Hardware Specification: Yes. Excerpt: "All of the experimental results in this section were obtained using a single node of a cluster, consisting of a 40 core Intel Xeon Gold 6138 CPU and a NVIDIA Tesla V100 SXM2 GPU with 32GB of RAM."
Software Dependencies: No. The paper mentions "pure PyTorch" but does not specify its version number or any other software dependencies with version numbers.
Experiment Setup: Yes. Excerpt: "Unless otherwise specified, all models in this section were implemented in pure PyTorch, trained using the AdamW optimizer with a learning rate of 0.01 and a batch size of 1000. The DLFMs and DGPs with random feature expansions [Cutajar et al., 2017] tested all utilised a single hidden layer of dimensionality DF(ℓ) = 3, 100 Monte Carlo samples and 100 random features per layer."
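To illustrate what a "random feature expansion" with 100 features per layer looks like in code, here is a minimal NumPy sketch of a standard random Fourier feature regression. This is only a structural analogue: the paper's features are ODE-based rather than Fourier, and all names, sizes, and the ridge noise term below are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# 100 random features per layer, as in the reported setup.
n_features = 100
lengthscale = 1.0  # assumed kernel lengthscale for this sketch

def random_features(x, omega, b):
    """Map inputs x of shape (N, 1) to an (N, n_features) feature matrix
    using cosine random features (Fourier-style, not the paper's ODE-based ones)."""
    return np.sqrt(2.0 / n_features) * np.cos(x @ omega + b)

# Toy regression data standing in for a benchmark task.
x = rng.uniform(-3.0, 3.0, size=(200, 1))
y = np.sin(2.0 * x).ravel() + 0.1 * rng.normal(size=200)

# Sample the random frequencies and phases once, then fit linear weights
# on top of the fixed feature map via a ridge solve (noise variance 0.01).
omega = rng.normal(scale=1.0 / lengthscale, size=(1, n_features))
b = rng.uniform(0.0, 2.0 * np.pi, size=n_features)
Phi = random_features(x, omega, b)
w = np.linalg.solve(Phi.T @ Phi + 0.01 * np.eye(n_features), Phi.T @ y)
pred = Phi @ w
```

In the full models, several such feature layers are stacked and the weights are treated probabilistically (hence the 100 Monte Carlo samples per layer), rather than fit by a single deterministic solve as in this sketch.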