Koopa: Learning Non-stationary Time Series Dynamics with Koopman Predictors

Authors: Yong Liu, Chenyu Li, Jianmin Wang, Mingsheng Long

NeurIPS 2023

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We conduct extensive experiments to evaluate the performance and efficiency of Koopa. For multivariate forecasting, we include six real-world benchmarks used in Autoformer [48]: ECL (UCI), ETT [53], Exchange [22], ILI (CDC), Traffic (PeMS), and Weather (Wetterstation). For univariate forecasting, we evaluate the performance on the well-acknowledged M4 dataset [39]." |
| Researcher Affiliation | Academia | "Yong Liu, Chenyu Li, Jianmin Wang, Mingsheng Long. School of Software, BNRist, Tsinghua University, China. {liuyong21,lichenyu20}@mails.tsinghua.edu.cn, {jimwang,mingsheng}@tsinghua.edu.cn" |
| Pseudocode | Yes | "Algorithm 1: Koopa Operator Adaptation" and "Algorithm 2: Accelerated Koopa Operator Adaptation" |
| Open Source Code | Yes | Code is available at https://github.com/thuml/Koopa. |
| Open Datasets | Yes | "For multivariate forecasting, we include six real-world benchmarks used in Autoformer [48]: ECL (UCI), ETT [53], Exchange [22], ILI (CDC), Traffic (PeMS), and Weather (Wetterstation). For univariate forecasting, we evaluate the performance on the well-acknowledged M4 dataset [39]." |
| Dataset Splits | Yes | "We follow the data processing and split ratio used in TimesNet [47]." |
| Hardware Specification | Yes | "Experiments are implemented in PyTorch [34] and conducted on NVIDIA TITAN RTX 24GB GPUs." |
| Software Dependencies | No | The paper states that experiments are "implemented in PyTorch" but does not specify the PyTorch version or the versions of any other key software dependencies. |
| Experiment Setup | Yes | "Koopa is trained with L2 loss and optimized by ADAM [17] with an initial learning rate of 0.001 and batch size set to 32. The training process is early stopped within 10 epochs." |
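The table notes that the paper provides pseudocode for "Koopa Operator Adaptation". As a rough illustration of the DMD-style step that underlies Koopman operator fitting (a minimal NumPy sketch under our own assumptions, not the authors' implementation; `fit_koopman_operator` and `rollout` are illustrative names), a linear operator can be estimated by least squares over consecutive latent snapshots and then applied autoregressively for prediction:

```python
import numpy as np

def fit_koopman_operator(Z):
    """Fit a linear operator K such that Z[t+1] ≈ K @ Z[t], via least
    squares over consecutive snapshot pairs (a DMD-style estimate).
    Z has shape (T, d): T snapshots of a d-dimensional latent state."""
    X, Y = Z[:-1], Z[1:]                     # (T-1, d) input/target pairs
    # lstsq solves min_W ||X @ W - Y||_F; then z_{t+1} = W.T @ z_t, so K = W.T
    W, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return W.T

def rollout(K, z0, steps):
    """Forecast future latent states by repeatedly applying K to z0."""
    preds, z = [], z0
    for _ in range(steps):
        z = K @ z
        preds.append(z)
    return np.stack(preds)                   # shape (steps, d)
```

On snapshots generated by a truly linear system, the least-squares fit recovers the transition matrix; on learned embeddings of real series, it yields the best linear one-step approximation in the Frobenius-norm sense.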