Continuous-time identification of dynamic state-space models by deep subspace encoding

Authors: Gerben I. Beintema, Maarten Schoukens, Roland Tóth

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "This paper presents a novel estimation method that addresses all these aspects and that can obtain state-of-the-art results on multiple benchmarks with compact fully connected neural networks capturing the CT dynamics." (from the Abstract and Section 5, Experiments).
Researcher Affiliation | Academia | Department of Electrical Engineering, Eindhoven University of Technology, The Netherlands ({g.i.beintema,m.schoukens,r.toth}@tue.nl); also affiliated with the Systems and Control Laboratory, Institute for Computer Science and Control, Budapest, Hungary.
Pseudocode | No | The paper describes procedures in text and mathematical formulations but does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures).
Open Source Code | Yes | The code used for both the SUBNET and neural ODE experiments is available at https://github.com/GerbenBeintema/CT-subnet.
Open Datasets | Yes | Datasets: (i) the CCT dataset is described in Schoukens & Noël (2017); Schoukens et al. (2017) and is available for download at https://data.4tu.nl/articles/dataset/Cascaded_Tanks_Benchmark_Combining_Soft_and_Hard_Nonlinearities/12960104; (ii) the CED dataset is described in Wigren & Schoukens (2017) and is available for download at http://www.it.uu.se/research/publications/reports/2017-024/; (iii) the EMPS dataset is described in Janot et al. (2019) and is available for download at https://www.nonlinearbenchmark.org/benchmarks/emps.
Dataset Splits | Yes | CCT: "The first dataset is used for training, the first 512 samples of the second set are used for validation (used only for early stopping) and the entire second set for testing." CED: "The first 300 samples are used for training and the other 200 samples are for testing; of those samples, the first 100 samples are also used for validation with both datasets." EMPS: "we utilize 17885 samples of the first set for training and the last 6956 samples are used for validation while the entire second set is used for testing."
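Read literally, these splits amount to simple index slicing over the benchmark records. The Python sketch below is a hypothetical reconstruction of that slicing (function names, record variables, and the example record length are assumptions, not taken from the paper or its released code):

```python
import numpy as np

def split_cct(set1, set2):
    # CCT: first record for training; first 512 samples of the second record
    # for validation (early stopping only); the entire second record for testing.
    return set1, set2[:512], set2

def split_ced(record):
    # CED: first 300 samples for training, remaining 200 for testing;
    # the first 100 samples of the test part also serve as validation data.
    train, test = record[:300], record[300:500]
    return train, test[:100], test

def split_emps(set1, set2):
    # EMPS: 17885 samples of the first record for training, its last 6956
    # samples for validation; the entire second record for testing.
    return set1[:17885], set1[-6956:], set2

# Example call, e.g. for two CCT records of 1024 samples each:
train, val, test = split_cct(np.zeros(1024), np.zeros(1024))
```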
Hardware Specification | No | The paper states "It takes about 15 minutes to estimate a single CT SUBNET model and 2 hours for a single neural ODE model on a consumer laptop." This is a general description and does not provide specific hardware details like GPU/CPU models, processor types, or memory amounts.
Software Dependencies | No | The paper mentions software such as the 'Adam optimizer' and an 'RK4 step' but does not provide specific version numbers for any software dependencies or libraries.
Experiment Setup | Yes | "Using the SUBNET method, we estimate models where the three functions hθ, fθ and ψθ are implemented as 2 hidden layer neural networks with 64 hidden nodes per layer, tanh activation and a linear bypass from the input to the output for CCT and CED, and 1 hidden layer with 30 hidden nodes for EMPS. As an ODE solver, we use a single RK4 step between samples... The training is done by using the Adam optimizer with default settings (Kingma & Ba, 2015) with a batch size of 32 for CED, 64 for CCT and 1024 for EMPS."
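For illustration, a minimal PyTorch sketch of the quoted CCT/CED setup; the class name BypassMLP, the rk4_step helper, and the placeholder dimensions nx, nu, ny and sampling time dt are assumptions for this sketch, not taken from the released CT-subnet code:

```python
import torch
import torch.nn as nn

class BypassMLP(nn.Module):
    """Tanh MLP with a linear bypass from input to output, following the quoted
    description of hθ, fθ and ψθ (class and argument names are illustrative)."""
    def __init__(self, n_in, n_out, hidden=64, n_hidden_layers=2):
        super().__init__()
        layers, width = [], n_in
        for _ in range(n_hidden_layers):
            layers += [nn.Linear(width, hidden), nn.Tanh()]
            width = hidden
        layers.append(nn.Linear(width, n_out))
        self.net = nn.Sequential(*layers)
        self.bypass = nn.Linear(n_in, n_out, bias=False)  # linear input-to-output bypass

    def forward(self, z):
        return self.net(z) + self.bypass(z)

def rk4_step(f, x, u, dt):
    """One classical RK4 step of dx/dt = f(x, u) over a sample interval dt,
    with the input u held constant over the step."""
    k1 = f(x, u)
    k2 = f(x + 0.5 * dt * k1, u)
    k3 = f(x + 0.5 * dt * k2, u)
    k4 = f(x + dt * k3, u)
    return x + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Example wiring for a small model (nx, nu, ny and dt are placeholders).
nx, nu, ny, dt = 2, 1, 1, 4.0
f_net = BypassMLP(nx + nu, nx)                 # state-derivative network fθ
h_net = BypassMLP(nx + nu, ny)                 # output network hθ
f = lambda x, u: f_net(torch.cat([x, u], dim=-1))

x = torch.zeros(32, nx)                        # batch of 32 states
u = torch.zeros(32, nu)
x_next = rk4_step(f, x, u, dt)                 # single RK4 step between samples

optimizer = torch.optim.Adam(list(f_net.parameters()) + list(h_net.parameters()))
```

A full SUBNET estimator also includes the encoder ψθ, which maps a window of past input-output samples to the initial state; it is omitted here for brevity.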