Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].

Lightweight Online Adaption for Time Series Foundation Model Forecasts

Authors: Thomas L Lee, William Toner, Rajkarn Singh, Artjom Joosen, Martin Asenov

ICML 2025 | Venue PDF | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type Experimental We evaluate the performance of ELF in conjunction with several recent FMs across a suite of standard time series datasets. In all of our experiments we find that using ELF improves performance. This work demonstrates how efficient use of online feedback can improve FM forecasts.
Researcher Affiliation Collaboration ¹School of Informatics, University of Edinburgh, UK; ²Work done while an intern at Huawei; ³Huawei SIR Lab, Edinburgh Research Centre, UK. Correspondence to: Thomas L. Lee <EMAIL>, William Toner <EMAIL>.
Pseudocode Yes Algorithmic descriptions of these processes are presented in Appendix A.1.3: Algorithm 1 (Predicting with ELF-Forecaster), Algorithm 2 (Fitting of ELF-Forecaster), Algorithm 3 (Complex-Valued Woodbury Update; Woodbury, 1950).
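Algorithm 3 itself is not reproduced in the excerpt above. As a rough illustration of the technique it names, here is a minimal NumPy sketch of a rank-k Woodbury update of an inverse Gram matrix for complex-valued data; the function name and interface are assumptions for illustration, not the authors' code:

```python
import numpy as np

def woodbury_update(P, U):
    """Rank-k Woodbury update of an inverse Gram matrix.

    Given P = A^{-1} (d x d, possibly complex) and k new rows U (k x d),
    returns (A + U^H U)^{-1} without re-inverting from scratch.
    """
    # Small k x k capacitance matrix: K = I + U P U^H
    K = np.eye(U.shape[0], dtype=P.dtype) + U @ P @ U.conj().T
    # Woodbury identity: (A + U^H U)^{-1} = P - P U^H K^{-1} U P
    return P - P @ U.conj().T @ np.linalg.solve(K, U @ P)
```

Solving the small k x k system instead of re-inverting the d x d matrix is what makes per-update cost low when k << d, which matches the paper's emphasis on lightweight updates.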
Open Source Code No The paper does not contain any explicit statements about code release, links to repositories, or mention of code in supplementary materials for the methodology described.
Open Datasets Yes The datasets we evaluate on are ETTh1, ETTh2, ETTm1, ETTm2, Weather, Traffic, ECL, Solar and US Weather (given in Zhou et al. (2021); Wu et al. (2021); Liu et al. (2022); Darlow et al. (2024)). These are standard datasets used widely in time series work (Ekambaram et al., 2024) and importantly, none of the FMs used these datasets for pretraining.
Dataset Splits Yes Specifically, we adopt the rolling window setting described in Section 3, moving through the time series and producing forecasts one time step at a time. We update ELF every M = 200 time steps using the online feedback provided by the newest 200 data points. Additionally, because we are in the rolling window setting we cannot normalise the data ahead of time, as is done in the original works proposing TAFAS, OneNet and FSNet; instead we use RevIN (Kim et al., 2021) with each method to standardise the data in our experiments. Finally, two TAFAS-specific details need mentioning. First, we do not use Prediction Adjustment, because in our setting we assume forecasts cannot be modified after they have been given, as in the real world they are often used immediately for decision making. Second, for TTM we use both gated calibration modules on all datasets bar Solar and Traffic; for the other FMs, and for Solar and Traffic with TTM, we use only the output gated calibration module due to GPU memory constraints during the experiments.
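RevIN (Kim et al., 2021) reversibly standardises each input window and de-standardises the resulting forecast, which is why it can replace ahead-of-time normalisation in the rolling window setting. A minimal sketch of the idea; the `revin_forecast` wrapper and its interface are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def revin_forecast(model, context, eps=1e-5):
    """RevIN-style wrapper: normalise per window, forecast, de-normalise.

    `context` has shape (..., T); `model` maps a normalised window to a
    normalised forecast. Statistics are computed per instance, so no
    global normalisation pass over the series is needed.
    """
    mu = context.mean(axis=-1, keepdims=True)
    sigma = context.std(axis=-1, keepdims=True) + eps
    pred = model((context - mu) / sigma)
    # De-normalise with the same per-window statistics.
    return pred * sigma + mu
```

Because the same per-window mean and standard deviation are reused on the output, the transform is exactly reversible up to the `eps` stabiliser.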
Hardware Specification Yes We compute seconds per update step using 2 Intel(R) Xeon(R) Platinum 8168 CPUs and put in brackets the proportional speedup of updating using ELF versus the other online adaption methods. The results show that ELF performs the best in terms of both forecast accuracy (MASE) and compute efficiency (sec. per update step).
Software Dependencies No The paper does not explicitly list any specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions, or specific library versions).
Experiment Setup Yes For the ELF-Forecaster the parameter values are λ = 20 and α = 0.9. For the ELF-Weighter the hyperparameters are η = 0.5 for all weighters and B = 5 update steps. Additionally, at the start of deployment the ELF-Forecaster has insufficient data to give accurate forecasts, so we use a warm-up period of 5 update steps during which it is not included in the combined forecast.
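The excerpt does not quote the ELF-Weighter's update rule. One common scheme consistent with a learning rate η and periodic updates is an exponentiated-gradient step over per-forecaster weights; the sketch below is an assumption-laden illustration of that family of methods, not the paper's algorithm:

```python
import numpy as np

def eg_update(weights, losses, eta=0.5):
    """Exponentiated-gradient step over forecaster weights.

    Forecasters with higher recent loss are down-weighted
    multiplicatively; weights are re-normalised to sum to 1.
    """
    w = weights * np.exp(-eta * np.asarray(losses))
    return w / w.sum()
```

During a warm-up (e.g. the first 5 update steps described above), one would pin the ELF-Forecaster's weight to zero and only begin updating once it has seen enough data.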