Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Non-asymptotic and Accurate Learning of Nonlinear Dynamical Systems
Authors: Yahya Sattar, Samet Oymak
JMLR 2022 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We verify our theoretical results through various numerical experiments. Keywords: nonlinear dynamical systems, stability, uniform convergence, learning from single trajectory |
| Researcher Affiliation | Academia | Yahya Sattar EMAIL Department of Electrical and Computer Engineering University of California Riverside, CA 92521, USA Samet Oymak EMAIL Department of Electrical and Computer Engineering University of California Riverside, CA 92521, USA |
| Pseudocode | No | The paper describes the gradient descent algorithm with the iterate θ_{τ+1} = θ_τ − η∇L(θ_τ) in Equation (2.4) but does not present it within a clearly labeled pseudocode or algorithm block with structured steps. |
| Open Source Code | No | The paper does not contain any explicit statement about releasing source code, nor does it provide links to a code repository in the main text, footnotes, or appendices. |
| Open Datasets | No | For our experiments, we choose unstable nonlinear dynamical systems (ρ(A) > 1) governed by the nonlinear state equation h_{t+1} = φ(Ah_t + Bu_t) + w_t with state dimension n = 80 and input dimension p = 50. A is generated with N(0,1) entries and scaled to have its largest 10 eigenvalues greater than 1. B is generated with i.i.d. N(0, 1/n) entries. For nonlinearity, we use either softplus (φ(x) = ln(1 + e^x)) or leaky-ReLU (φ(x) = max(x, λx), with leakage 0 ≤ λ ≤ 1) activations. ... Lastly, z_t ∼ i.i.d. N(0, I_p) and w_t ∼ i.i.d. N(0, σ²I_n). The paper uses synthetically generated data based on specified distributions and parameters, rather than a pre-existing, publicly available dataset. |
| Dataset Splits | No | The trajectory length is set to T = 2000 and the noise variance is set to σ² = 0.01. In Figure 2a, we plot the normalized estimation error of A over different values of λ. We observe that decreasing nonlinearity leads to faster convergence of gradient descent. The paper describes using a single finite trajectory and repeating experiments, but does not specify formal training/validation/test splits for a dataset. |
| Hardware Specification | No | The paper describes the experimental setup in terms of model parameters, noise levels, and non-linearity types, but it does not specify any particular hardware (e.g., CPU, GPU models, or cloud computing resources) used for running the experiments. |
| Software Dependencies | No | The paper mentions running 'gradient descent' and using 'softplus' or 'leaky-ReLU activations' but does not specify any software libraries, frameworks, or their version numbers used for implementation. |
| Experiment Setup | Yes | For our experiments, we choose unstable nonlinear dynamical systems (ρ(A) > 1) governed by the nonlinear state equation h_{t+1} = φ(Ah_t + Bu_t) + w_t with state dimension n = 80 and input dimension p = 50. ... We run gradient descent with fixed learning rate η = 0.1/T, where T denotes the trajectory length. ... The trajectory length is set to T = 2000 and the noise variance is set to σ² = 0.01. ... Each experiment is repeated 20 times... |
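Since the paper releases no code, the setup quoted above can only be approximated. The following is a minimal sketch, not the authors' implementation: it simulates a single trajectory of h_{t+1} = φ(Ah_t + Bu_t) + w_t with a softplus activation and runs plain gradient descent on the least-squares loss, as in the iterate θ_{τ+1} = θ_τ − η∇L(θ_τ). Dimensions, trajectory length, and the stability of A are scaled down from the paper's values (n = 80, p = 50, T = 2000, unstable A) so the demo runs quickly; all variable names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions; the paper uses n = 80, p = 50, T = 2000.
n, p, T = 8, 5, 200
sigma = 0.1  # noise std; the paper sets sigma^2 = 0.01

def softplus(x):
    return np.log1p(np.exp(x))

# The paper scales A to be unstable (rho(A) > 1); here A is kept
# mild so this small demo stays numerically well behaved.
A_true = rng.normal(size=(n, n)) / np.sqrt(n)
B_true = rng.normal(size=(n, p)) / np.sqrt(n)

# Generate one trajectory: h_{t+1} = softplus(A h_t + B u_t) + w_t.
H = np.zeros((T + 1, n))
U = rng.normal(size=(T, p))
for t in range(T):
    H[t + 1] = softplus(A_true @ H[t] + B_true @ U[t]) + sigma * rng.normal(size=n)

def loss_and_grad(theta):
    """Least-squares trajectory loss and its gradient w.r.t. theta = [A | B]."""
    A, B = theta[:, :n], theta[:, n:]
    pre = H[:-1] @ A.T + U @ B.T        # (T, n) pre-activations A h_t + B u_t
    resid = softplus(pre) - H[1:]       # one-step prediction residuals
    g = resid / (1.0 + np.exp(-pre))    # chain rule: softplus'(x) = sigmoid(x)
    grad = g.T @ np.hstack([H[:-1], U]) / T
    loss = 0.5 * np.mean(np.sum(resid ** 2, axis=1))
    return loss, grad

# Gradient descent theta <- theta - eta * grad L(theta). The paper uses
# eta = 0.1/T with a summed loss; with the mean loss above eta = 0.1 matches.
theta = np.zeros((n, n + p))
eta = 0.1
losses = []
for _ in range(500):
    loss, grad = loss_and_grad(theta)
    losses.append(loss)
    theta -= eta * grad

print(f"initial loss {losses[0]:.4f} -> final loss {losses[-1]:.4f}")
```

To reproduce the paper's leaky-ReLU experiments one would swap in φ(x) = max(x, λx) (with φ'(x) = 1 for x > 0 and λ otherwise) and sweep λ as in Figure 2a; the gradient computation otherwise carries over unchanged.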