Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

From Fourier to Koopman: Spectral Methods for Long-term Time Series Prediction

Authors: Henning Lange, Steven L. Brunton, J. Nathan Kutz

JMLR 2021 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We extensively benchmark these algorithms against other leading forecasting methods on a range of synthetic experiments as well as in the context of real-world power systems and fluid flows." [5. Experiments] "The code for the Fourier and Koopman forecasting algorithms is available at https://github.com/helange23/from_fourier_to_koopman." [5.1 Synthetic experiments] "In the following, the efficacy of the algorithms is probed on synthetic tasks and compared to the Long Short-Term Memory (LSTM) (Hochreiter and Schmidhuber, 1997)." [5.2 Natural data experiments]
Researcher Affiliation | Academia | Henning Lange, Department of Applied Mathematics, University of Washington, Seattle, WA 98195-4322, USA; Steven L. Brunton, Department of Mechanical Engineering, University of Washington, Seattle, WA 98195-4322, USA; J. Nathan Kutz, Department of Applied Mathematics, University of Washington, Seattle, WA 98195-4322, USA.
Pseudocode | Yes |

Algorithm 1: Learning a linear oscillator from data
    Randomly initialize ω and A
    while not converged do
        for i ∈ {1, .., n} do
            ω_i ← arg min_{ω_i} E(ω_i; M_i)      # initial guess using the FFT
            while not converged do
                ω_i ← ω_i − α ∂E/∂ω_i            # refine initial guess via gradient descent
            end while
            A ← (ΩᵀΩ)⁻¹ΩᵀX                       # optimize A using the pseudo-inverse
        end for
    end while

Algorithm 2: Learning a nonlinear oscillator from data
    Randomly initialize ω and Θ
    while not converged do
        for i ∈ {1, .., n} do
            Compute S_{i,t}[k]                   # based on (13) and (14)
            E ← [0]^{T×K}
            for t ∈ {1, .., T} do
                for k ∈ {0, .., 2N} do
                    E[t,k] ← E[t,k] + S_{i,t}[k] # implements (15)
                end for
            end for
            ω_i ← arg min_ω F[E](ω)              # initial guess via the FFT of E
            while not converged do
                ω_i ← ω_i − α ∂E/∂ω_i            # refine initial guess of ω_i
            end while
            for a couple of iterations do
                Θ ← Θ − α ∂E(ω, Θ)/∂Θ            # gradient descent on Θ
            end for
        end for
    end while
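Algorithm 1 can be sketched in a few lines of NumPy. This is our own illustrative rendering, not the paper's released implementation (which is at the linked repository); the function name, finite-difference gradient, step sizes, and deterministic zero initialization are assumptions made for a self-contained example.

```python
import numpy as np

def fit_linear_oscillator(x, n_freq=2, n_outer=2, lr=1e-10, n_gd=100):
    """Algorithm 1 sketch: alternate an FFT-based guess for each frequency
    omega_i, refine it by gradient descent, and solve for the amplitudes A
    with a pseudo-inverse (least squares)."""
    T = len(x)
    t = np.arange(T, dtype=float)
    omega = np.zeros(n_freq)  # the paper initializes randomly; zeros keep this deterministic

    def design(om):
        # Omega matrix: columns [cos(w t), sin(w t)] for each frequency w
        return np.column_stack([f(w * t) for w in om for f in (np.cos, np.sin)])

    def sq_err(om):
        Om = design(om)
        a, *_ = np.linalg.lstsq(Om, x, rcond=None)
        return np.sum((x - Om @ a) ** 2)

    for _ in range(n_outer):
        for i in range(n_freq):
            # FFT step: set omega_i to the dominant frequency of the residual
            Om = design(omega)
            a, *_ = np.linalg.lstsq(Om, x, rcond=None)
            spec = np.abs(np.fft.rfft(x - Om @ a))
            spec[0] = 0.0  # ignore the DC component
            omega[i] = 2.0 * np.pi * np.argmax(spec) / T
            # refine omega_i by gradient descent (finite-difference gradient)
            for _ in range(n_gd):
                hi, lo = omega.copy(), omega.copy()
                hi[i] += 1e-6
                lo[i] -= 1e-6
                omega[i] -= lr * (sq_err(hi) - sq_err(lo)) / 2e-6
    A, *_ = np.linalg.lstsq(design(omega), x, rcond=None)  # pseudo-inverse solve
    return omega, A
```

The FFT restricts the initial guess to a grid of bin frequencies 2πk/T; the inner gradient loop is what allows off-grid frequencies to be recovered, which is the key point of the paper's argument.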
Open Source Code | Yes | The code for the Fourier and Koopman forecasting algorithms is available at https://github.com/helange23/from_fourier_to_koopman.
Open Datasets | Yes | For this, a one-dimensional time series of past demand is extracted from the RE-Europe data set (Jensen and Pinson, 2017). Energy demand usually exhibits multiple scales, e.g. because energy consumption is usually higher during weekdays than on weekends, it often exhibits weekly alongside daily and seasonal patterns. The data from the Kolmogorov 2D flow was taken from the experiments conducted in Tithof et al. (2017). We test the modified algorithm on a data set containing accelerometer readings of a mobile phone located in test subjects' pockets (subject 28, right pocket) (Vajdi et al., 2019) and compare to the Fourier algorithm.
Dataset Splits | Yes | The data set at hand contains 3 years of data at an hourly rate, i.e. 26,280 data points. The first 19,000 data points were used for training, the next 1,000 for testing, and the last 6,280 for evaluation. The evaluation set was in turn divided into four consecutive subsets of equal size. For each experiment, the first 75% of the data was used for training whereas the remaining 25% was used for testing (temporal split).
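The two split schemes above can be written down directly from the quoted numbers. The function names below are ours, purely for illustration; only the index boundaries come from the paper.

```python
import numpy as np

def energy_demand_split(x):
    """3 years of hourly demand: 26,280 points split chronologically into
    19,000 train / 1,000 test / 6,280 evaluation, with the evaluation set
    divided into four consecutive, equally sized subsets."""
    assert len(x) == 26280
    train, test, evaluation = x[:19000], x[19000:20000], x[20000:]
    eval_subsets = np.split(evaluation, 4)  # four consecutive blocks of 1,570
    return train, test, eval_subsets

def temporal_split(x, train_frac=0.75):
    """Chronological 75/25 split used for the per-experiment evaluations."""
    cut = int(len(x) * train_frac)
    return x[:cut], x[cut:]
```

Note that both splits are strictly chronological (no shuffling), which is what makes them valid tests of long-term forecasting.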
Hardware Specification | No | The paper does not provide specific hardware details (such as GPU/CPU models or processor types) for its experiments. It describes software implementations and model architectures, but not the underlying hardware.
Software Dependencies | No | For LSTMs and GRUs, the standard PyTorch implementation is utilized and a parameter sweep over the number of layers as well as the number of hidden units per layer is performed, totalling 100 different configurations. For the Box-Jenkins models, the Hyndman-Khandakar (Hyndman et al., 2007) Auto ARIMA algorithm implemented in the R forecasting package is employed, which uses a combination of unit root tests, minimization of the Akaike Information Criterion (Sakamoto et al., 1986), and Maximum Likelihood Estimation to obtain an ARIMA model. For Echo State Networks, the easyesn package was employed. The paper mentions several software components (PyTorch, the R forecasting package, and the easyesn package) but does not specify their version numbers.
Experiment Setup | Yes | For the LSTMs, a 5-layer, 20-unit network comprising 15,301 parameters was chosen. For long-term predictions, previously predicted snapshots are recursively fed back into the network. The LSTM is compared to the Koopman algorithm with a 3-layer decoder of roughly the same number of parameters (16,384) and a single frequency. The Fourier algorithm was instantiated with 8 frequencies and therefore 24 parameters. For the Box-Jenkins models, the Hyndman-Khandakar (Hyndman et al., 2007) Auto ARIMA algorithm implemented in the R forecasting package is employed, which uses a combination of unit root tests, minimization of the Akaike Information Criterion (Sakamoto et al., 1986), and Maximum Likelihood Estimation to obtain an ARIMA model. Clockwork-RNNs (CW-RNN) (Koutnik et al., 2014) are also evaluated...; for both models, a single-layer CW-RNN was utilized and a parameter sweep over the number of clocks and states per clock was performed. The Temporal Convolutional Network (Bai et al., 2018) was instantiated with 13 layers (2^13 ≈ 365 × 24), a kernel size of 3, and a hidden dimensionality of 25. Temporal dependencies were modeled by learning a function that produces the value of the next time step as a function of the previous 10.
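The 24-parameter count for the Fourier algorithm is consistent with three parameters per frequency in the one-dimensional case (one frequency ω plus one cosine and one sine amplitude): 8 × 3 = 24. A minimal sketch of the forecast step of such a model follows; this is our own illustrative code under that parameterization, not the paper's implementation.

```python
import numpy as np

def fourier_forecast(t, omega, a_cos, a_sin):
    """Evaluate x(t) = sum_i a_cos[i] cos(omega[i] t) + a_sin[i] sin(omega[i] t).

    Each frequency carries 3 parameters (omega[i], a_cos[i], a_sin[i]),
    so 8 frequencies give 8 * 3 = 24 parameters in total. Because the model
    is an explicit function of t, it extrapolates to arbitrary horizons
    without recursively feeding predictions back in (unlike the LSTM)."""
    t = np.asarray(t, dtype=float)[:, None]  # shape (T, 1), broadcasts against (n_freq,)
    return np.cos(t * omega) @ a_cos + np.sin(t * omega) @ a_sin
```

This contrast between a closed-form function of time and recursive one-step prediction is precisely why error accumulation differs so strongly between the spectral methods and the recurrent baselines.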