Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

SKOLR: Structured Koopman Operator Linear RNN for Time-Series Forecasting

Authors: Yitian Zhang, Liheng Ma, Antonios Valkanas, Boris N. Oreshkin, Mark Coates

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Numerical experiments on various forecasting benchmarks and dynamical systems show that this streamlined, Koopman-theory-based design delivers exceptional performance. Our code is available at: https://github.com/networkslab/SKOLR. (Section 4: Experiments)
Researcher Affiliation Collaboration ¹Department of Electrical and Computer Engineering, McGill University, Montreal, Canada; ²Mila - Quebec Artificial Intelligence Institute, Montreal, Canada; ³ILLS - International Laboratory on Learning Systems, Montreal, Canada; ⁴Amazon Science.
Pseudocode No The paper describes the methodology using mathematical equations and descriptive text, but it does not include a distinct block or figure explicitly labeled as "Pseudocode" or "Algorithm".
Open Source Code Yes Our code is available at: https://github.com/networkslab/SKOLR.
Open Datasets Yes We evaluate SKOLR on widely-used public benchmark datasets. For long-term forecasting, we use Weather, Traffic, Electricity, ILI and four ETT datasets (ETTh1, ETTh2, ETTm1, ETTm2). We assess short-term performance on the M4 dataset (Makridakis et al., 2020).
Dataset Splits Yes Following the standard pipelines, the dataset is split into training, validation, and test sets with the ratio of 6:2:2 for four ETT datasets and 7:1:2 for the remaining datasets.
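The quoted split protocol (6:2:2 for the ETT datasets, 7:1:2 for the rest) is chronological, as is standard for these benchmarks. A minimal sketch of such a split, assuming the data is ordered by time (the function name and signature are illustrative, not from the paper's code):

```python
import numpy as np

def chronological_split(series, ratios=(0.6, 0.2, 0.2)):
    """Split a time series into train/val/test along the time axis.

    The paper uses ratios 6:2:2 for the four ETT datasets and
    7:1:2 for the remaining datasets.
    """
    n = len(series)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = series[:n_train]
    val = series[n_train:n_train + n_val]
    test = series[n_train + n_val:]
    return train, val, test

# Example: a 6:2:2 split of 10,000 timesteps
data = np.arange(10000)
train, val, test = chronological_split(data)
print(len(train), len(val), len(test))  # 6000 2000 2000
```

Keeping the splits contiguous in time avoids leaking future observations into the training set.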
Hardware Specification Yes Figure 5: Model comparison on error and training epoch time on P100 GPU. Table 12: Model Efficiency and Performance Comparison for Different Datasets with T = 96. Parameters (Params) are measured in millions (M), GPU memory (GPU) in MiB, computation time per epoch in seconds (s) on NVIDIA V100 GPU with batch size 32.
Software Dependencies No We implement SKOLR in PyTorch, applying instance-normalization and denormalization (Kim et al., 2022) to inputs and predictions respectively. The LRU is trained using the AdamW optimizer with no weight decay applied to the recurrent parameters.
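The instance-normalization/denormalization step cited above (Kim et al., 2022) normalizes each lookback window before the model and restores the original scale afterwards. A NumPy sketch of this wrapper, under the assumption that inputs have shape (batch, lookback, channels); the paper's actual implementation is in PyTorch:

```python
import numpy as np

def instance_norm_forecast(model, x, eps=1e-5):
    """Normalize each input window per instance and channel, run the
    forecaster in normalized space, then denormalize the prediction.

    Illustrative sketch of the normalize/denormalize scheme the paper
    cites (Kim et al., 2022); `model` is any callable forecaster.
    """
    mean = x.mean(axis=1, keepdims=True)   # per-window, per-channel mean
    std = x.std(axis=1, keepdims=True) + eps
    y = model((x - mean) / std)            # forecast in normalized space
    return y * std + mean                  # restore original scale/offset

# Sanity check: with an identity "model", the wrapper is a no-op.
x = np.random.default_rng(0).normal(size=(4, 96, 7))
out = instance_norm_forecast(lambda z: z, x)
print(np.allclose(out, x))  # True
```

This per-instance renormalization is widely used to counter distribution shift between training and test windows in long-term forecasting.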
Experiment Setup Yes Following Koopa (Liu et al., 2023), we set the lookback window length L = 2T for prediction horizon T ∈ {48, 96, 144, 192} for all datasets, except ILI, for which we use T ∈ {24, 36, 48, 60}. We train using the AdamW optimizer (Loshchilov, 2017) with learning rate 1e-4 and weight decay 5e-4, using batch size 32 across all datasets. Complete hyperparameter configurations are detailed in Table 7.
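The setup above, combined with the note that no weight decay is applied to the recurrent parameters, corresponds to AdamW with two parameter groups. A minimal sketch of how such groups could be built; the `"lru"` name filter is a hypothetical convention for illustration, not taken from the paper's code:

```python
def split_param_groups(named_params, weight_decay=5e-4):
    """Build AdamW parameter groups: weight decay on all parameters
    except the recurrent (LRU) ones, per the paper's training setup.

    `named_params` is an iterable of (name, parameter) pairs; the
    "lru"-in-name rule is an assumption made for this sketch.
    """
    named_params = list(named_params)  # allow a generator input
    decay = [p for n, p in named_params if "lru" not in n]
    no_decay = [p for n, p in named_params if "lru" in n]
    return [
        {"params": decay, "weight_decay": weight_decay},
        {"params": no_decay, "weight_decay": 0.0},
    ]

# The groups can then be passed to torch.optim.AdamW(groups, lr=1e-4).
groups = split_param_groups([("encoder.weight", "W_e"),
                             ("lru.recurrent", "W_r")])
print(groups[1]["weight_decay"])  # 0.0
```

Per-parameter-group options are the standard PyTorch mechanism for exempting a subset of weights from regularization while sharing one optimizer.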