Learning Koopman Invariant Subspaces for Dynamic Mode Decomposition

Authors: Naoya Takeishi, Yoshinobu Kawahara, Takehisa Yairi

NeurIPS 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "In this paper, we propose a fully data-driven method for Koopman spectral analysis based on the principle of learning Koopman invariant subspaces from observed data. ... Moreover, we introduce empirical performance of DMD based on the LKIS framework with several nonlinear dynamical systems and applications, which proves the feasibility of LKIS-based DMD as a fully data-driven method for modal decomposition via the Koopman operator." Supporting sections: 5 "Numerical examples" and 6 "Applications". (For context, a minimal DMD sketch follows the table.)
Researcher Affiliation | Academia | Naoya Takeishi (Department of Aeronautics and Astronautics, The University of Tokyo), Yoshinobu Kawahara (The Institute of Scientific and Industrial Research, Osaka University; RIKEN Center for Advanced Intelligence Project), Takehisa Yairi (Department of Aeronautics and Astronautics, The University of Tokyo). Emails: {takeishi,yairi}@ailab.t.u-tokyo.ac.jp, ykawahara@sanken.osaka-u.ac.jp
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide concrete access to source code for the methodology described.
Open Datasets | Yes | "We applied LKIS-DMD (n = 10) to a time-series generated by a far-infrared laser, which was obtained from the Santa Fe Time Series Competition Data [50]."
Dataset Splits | Yes | "We simulated 25,000 steps for each attractor and used the first 10,000 steps for training, the next 5,000 steps for validation, and the last 10,000 steps for testing prediction accuracy." (See the split sketch after the table.)
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts, or cloud instance types) used for running its experiments.
Software Dependencies | No | The paper mentions using neural networks, MLPs, ReLU, batch normalization, and first-order gradient descent, but does not name specific software with version numbers (e.g., PyTorch 1.9) needed to replicate the experiments.
Experiment Setup | No | "We model g and h using multi-layer perceptrons (MLPs) with a parametric ReLU activation function [30]. Here, the sizes of the hidden layer of MLPs are defined by the arithmetic means of the sizes of the input and output layers of the MLPs. Thus, the remaining tunable hyperparameters are k (maximum delay of φ), p (dimensionality of x), and n (dimensionality of g). ... In the numerical experiments described in Sections 5 and 6, we performed optimization using first-order gradient descent. To stabilize optimization, batch normalization [31] was imposed on the inputs of hidden layers. ... all of whose hyperparameters were tuned using the validation set." The paper describes these architectural choices and the tuning process, but does not report the concrete values needed for reproduction (e.g., learning rate, batch size, or the tuned k, p, n). (A hedged training-setup sketch follows the table.)
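
For context on the Research Type row: LKIS-DMD learns observables g from data and then applies standard DMD to their values. Below is a minimal sketch of exact DMD on snapshot pairs in plain NumPy; it is not the authors' implementation, and the LKIS framework's learned observables would feed into a step like this.

    import numpy as np

    def exact_dmd(X, Y, r):
        # Exact DMD on snapshot pairs: each column of Y is the one-step
        # evolution of the corresponding column of X.
        U, s, Vh = np.linalg.svd(X, full_matrices=False)
        U, s, Vh = U[:, :r], s[:r], Vh[:r]
        A_tilde = U.conj().T @ Y @ Vh.conj().T @ np.diag(1.0 / s)  # rank-r projected operator
        eigvals, W = np.linalg.eig(A_tilde)                        # Koopman eigenvalue estimates
        modes = Y @ Vh.conj().T @ np.diag(1.0 / s) @ W             # exact DMD modes
        return eigvals, modes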
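The chronological split quoted under Dataset Splits is unambiguous and easy to reproduce. A sketch, assuming the simulated trajectory is available as a NumPy array (the file name is hypothetical):

    import numpy as np

    series = np.load("attractor_trajectory.npy")  # hypothetical file: 25,000 simulated steps
    train = series[:10_000]          # first 10,000 steps: training
    val = series[10_000:15_000]      # next 5,000 steps: validation (hyperparameter tuning)
    test = series[15_000:]           # last 10,000 steps: testing prediction accuracy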
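The quoted Experiment Setup constrains the network shapes but names neither a framework nor an optimizer. A sketch of one plausible reading, assuming PyTorch and Adam (both assumptions, as is the exact placement of batch normalization), with hidden sizes set to the arithmetic mean of the input and output sizes:

    import torch
    import torch.nn as nn

    def mlp(in_dim, out_dim):
        # Hidden size is the arithmetic mean of the input and output sizes,
        # per the quoted setup; batch norm is applied to the hidden layer's input.
        hid = (in_dim + out_dim) // 2
        return nn.Sequential(
            nn.Linear(in_dim, hid),
            nn.BatchNorm1d(hid),
            nn.PReLU(),              # parametric ReLU activation [30]
            nn.Linear(hid, out_dim),
        )

    p, n = 10, 10                    # hypothetical values; the tuned k, p, n are not reported
    g = mlp(p, n)                    # observable network g: x -> g(x)
    h = mlp(n, p)                    # reconstruction network h: g(x) -> x
    params = list(g.parameters()) + list(h.parameters())
    optimizer = torch.optim.Adam(params)  # Adam is an assumption; the paper says only
                                          # "first-order gradient descent"

This illustrates why the row is marked No: every concrete number above had to be invented, since the paper reports the tuning procedure but not the tuned values.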