SNODE: Spectral Discretization of Neural ODEs for System Identification

Authors: Alessio Quaglino, Marco Gallieri, Jonathan Masci, Jan Koutník

ICLR 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Experimental comparison to standard methods, such as backpropagation through explicit solvers and the adjoint technique (Chen et al., 2018), on training surrogate models of small and medium-scale dynamical systems shows that the proposed method is at least one order of magnitude faster at reaching a comparable value of the loss function. (A sketch of the adjoint baseline appears below the table.) |
| Researcher Affiliation | Industry | NNAISENSE, Lugano, Switzerland. {alessio, marco, jonathan, jan}@nnaisense.com |
| Pseudocode | Yes | Algorithm 1 δ-SNODE training... Algorithm 2 α-SNODE training |
| Open Source Code | No | The paper does not provide an explicit statement or link to open-source code for the described methodology. |
| Open Datasets | No | The paper describes generating synthetic data for its experiments (e.g., vehicle dynamics, multi-agent simulation) but does not provide access information or refer to a publicly available dataset for training. |
| Dataset Splits | No | The paper refers to cross-validation conceptually but does not provide specific train/validation/test splits (e.g., percentages, sample counts, or citations to predefined splits) for the experiments conducted. |
| Hardware Specification | Yes | Experiments were performed on an i9 Apple laptop with 32 GB of RAM. |
| Software Dependencies | No | The paper mentions the ADAM and SGD optimizers and PyTorch for automatic differentiation, but does not provide version numbers for these software dependencies. |
| Experiment Setup | Yes | Time horizon T = 10 s and batch size of 100 were used. Learning rates were set to 10^-2 for ADAM (for all methods) and 10^-3 for SGD (for α-SNODE). For the α-SNODE method, γ = 3 and 10 iterations were used for the SGD and ADAM algorithms at each epoch, as outlined in Algorithm 2. (See the collocation sketch below the table.) |
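
For context on the baselines named in the Research Type row, the sketch below shows what adjoint-based neural-ODE training (Chen et al., 2018) typically looks like with the torchdiffeq package. The vector field, placeholder data, and time grid are illustrative assumptions, not the paper's code; only the horizon (T = 10 s), batch size (100), and ADAM learning rate (10^-2) are taken from the Experiment Setup row.

```python
import torch
import torch.nn as nn
from torchdiffeq import odeint_adjoint as odeint  # Chen et al., 2018

class ODEFunc(nn.Module):
    """Hypothetical vector field f_theta(t, x) for a 2-state system."""
    def __init__(self, dim=2, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.Tanh(), nn.Linear(hidden, dim))

    def forward(self, t, x):
        return self.net(x)

func = ODEFunc()
t = torch.linspace(0.0, 10.0, 101)     # T = 10 s, as in the paper's setup
x0 = torch.randn(100, 2)               # batch size of 100
x_data = torch.randn(len(t), 100, 2)   # placeholder trajectories
opt = torch.optim.Adam(func.parameters(), lr=1e-2)

opt.zero_grad()
x_pred = odeint(func, x0, t)           # sequential solve, one step after another
loss = ((x_pred - x_data) ** 2).mean()
loss.backward()                        # gradients via the adjoint ODE
opt.step()
```

The sequential time-stepping inside `odeint` is exactly what SNODE's spectral discretization avoids.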
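The Pseudocode and Experiment Setup rows together describe the δ-/α-SNODE training loops and their hyperparameters. The following is a minimal sketch of the general spectral-collocation idea those algorithms build on: the trajectory is represented by its values at collocation nodes, a differentiation matrix turns the ODE into an algebraic residual, and trajectory and network are optimized jointly. The Chebyshev basis, the network, the placeholder observations, and the use of γ = 3 as a residual weight are assumptions made for illustration; the paper itself uses a Legendre basis and the procedures in Algorithms 1 and 2.

```python
import torch

def cheb_diff(n, T):
    """Differentiation matrix on n + 1 Chebyshev-Gauss-Lobatto nodes,
    mapped from [-1, 1] to [0, T] (Trefethen-style construction)."""
    k = torch.arange(n + 1, dtype=torch.float64)
    x = torch.cos(torch.pi * k / n)               # nodes, descending on [-1, 1]
    c = torch.ones(n + 1, dtype=torch.float64)
    c[0] = c[-1] = 2.0
    c = c * (-1.0) ** k
    dx = x.unsqueeze(1) - x.unsqueeze(0)
    D = torch.outer(c, 1.0 / c) / (dx + torch.eye(n + 1, dtype=torch.float64))
    D = D - torch.diag(D.sum(dim=1))              # diagonal = negative row sums
    return (-2.0 / T) * D, T * (1.0 - x) / 2.0    # d/dt matrix, nodes on [0, T]

torch.manual_seed(0)
n, d, T = 32, 2, 10.0                             # T = 10 s as in the paper
Dt, t = cheb_diff(n, T)                           # t would feed a time-dependent f
Dt = Dt.float()

f = torch.nn.Sequential(torch.nn.Linear(d, 32), torch.nn.Tanh(),
                        torch.nn.Linear(32, d))   # assumed vector field f_theta
X = torch.zeros(n + 1, d, requires_grad=True)     # trajectory values at the nodes
x_obs = torch.randn(n + 1, d)                     # placeholder observations

opt = torch.optim.Adam(list(f.parameters()) + [X], lr=1e-2)
for _ in range(10):                               # 10 inner iterations per epoch
    opt.zero_grad()
    residual = ((Dt @ X - f(X)) ** 2).mean()      # ODE enforced at all nodes at once
    data_fit = ((X - x_obs) ** 2).mean()
    loss = data_fit + 3.0 * residual              # gamma = 3 as a weight (assumed role)
    loss.backward()
    opt.step()
```

Because the residual couples all nodes through a single matrix product, every collocation point is updated in parallel rather than by stepping through time, which is the source of the speed-up quoted in the Research Type row.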