Learning Fast and Slow for Online Time Series Forecasting

Authors: Quang Pham, Chenghao Liu, Doyen Sahoo, Steven Hoi

ICLR 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments on real and synthetic datasets validate FSNet's efficacy and robustness to both new and recurring patterns.
Researcher Affiliation | Collaboration | Quang Pham, Chenghao Liu, Doyen Sahoo, Steven C.H. Hoi; Salesforce Research Asia; Institute for Infocomm Research (I2R), Agency for Science, Technology and Research (A*STAR), 1 Fusionopolis Way, #21-01, Connexis South Tower, Singapore 138632, Republic of Singapore; School of Computing and Information Systems, Singapore Management University
Pseudocode | Yes | We provide FSNet's pseudo-code in Appendix C.2.
Open Source Code | Yes | Our code is publicly available at: https://github.com/salesforce/fsnet/.
Open Datasets | Yes | ETT (Zhou et al., 2021) records the target value of oil temperature and 6 power load features over a period of two years. We consider the ETTh2 and ETTm1 benchmarks, where the observations are recorded hourly and at 15-minute intervals, respectively. ECL (Electricity Consuming Load) collects the electricity consumption of 321 clients from 2012 to 2014. Traffic records the road occupancy rates at San Francisco Bay area freeways. Weather records 11 climate features from nearly 1,600 locations in the U.S. at hourly intervals from 2010 to 2013.
Dataset Splits | Yes | We split the data into warm-up and online training phases by the ratio of 25:75. ... In the warm-up phase, we calculate the statistics to normalize online training samples, perform hyper-parameter cross-validation, and pre-train the models for the few methods.
Hardware Specification | No | The paper does not specify any particular hardware components like CPU or GPU models, or memory details used for running the experiments.
Software Dependencies | No | The paper mentions using the 'AdamW optimizer' but does not specify software versions for libraries, frameworks (e.g., PyTorch, TensorFlow), or programming languages used for implementation.
Experiment Setup | Yes | We split the data into warm-up and online training phases by the ratio of 25:75. ... During online learning, both the epoch and batch size are set to one... We cross-validate the hyper-parameters on the ETTh2 dataset and use them for the remaining ones. Particularly, we use the following configuration: adapter's EMA coefficient γ = 0.9, gradient EMA for triggering the memory interaction γ′ = 0.3, memory triggering threshold τ = 0.75... for all benchmarks, we set the look-back window length to 60 and vary the forecast horizon as H ∈ {1, 24, 48}.
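
The Dataset Splits and Experiment Setup rows above quote a concrete preprocessing and training protocol; the sketches below illustrate how those quoted settings could be wired together. First, the 25:75 warm-up/online split with normalization statistics computed on the warm-up portion only. This is a minimal sketch assuming the series is stored as a NumPy array; the function name and the epsilon guard are illustrative, not taken from the released code.

```python
import numpy as np

def warmup_online_split(series: np.ndarray, warmup_ratio: float = 0.25):
    """Split a series into warm-up and online phases (25:75 by default).

    Normalization statistics are computed on the warm-up portion only and
    then applied to both phases, mirroring the setup quoted above.
    """
    n_warmup = int(len(series) * warmup_ratio)
    warmup, online = series[:n_warmup], series[n_warmup:]

    mean = warmup.mean(axis=0)
    std = warmup.std(axis=0) + 1e-8  # guard against zero variance

    normalize = lambda x: (x - mean) / std
    return normalize(warmup), normalize(online)
```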
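
Second, the quoted online-learning protocol: epoch and batch size of one, a look-back window of 60, and a forecast horizon drawn from {1, 24, 48}. Only the window/horizon sizes, the single-sample updates, and the AdamW optimizer come from the quotes above; the model, learning rate, and MSE objective below are placeholders rather than the released FSNet implementation.

```python
import torch
import torch.nn.functional as F

LOOKBACK, HORIZON = 60, 24  # look-back of 60; horizon varied over {1, 24, 48}

def online_phase(model, series, lr=1e-3):
    """Online pass over a normalized series of shape (T, D).

    Each step forecasts the next HORIZON values from a LOOKBACK window and
    takes one gradient update (epoch = batch size = 1), so the accumulated
    losses form the online (prequential) error.
    """
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    losses = []
    for t in range(LOOKBACK, len(series) - HORIZON + 1):
        x = series[t - LOOKBACK:t].unsqueeze(0)   # (1, LOOKBACK, D)
        y = series[t:t + HORIZON].unsqueeze(0)    # (1, HORIZON, D)

        y_hat = model(x)                          # model maps x -> (1, HORIZON, D)
        loss = F.mse_loss(y_hat, y)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        losses.append(loss.item())
    return losses
```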
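
Finally, the quoted hyper-parameters γ = 0.9, γ′ = 0.3, and τ = 0.75 govern FSNet's gradient EMAs and memory trigger. The sketch below reflects one reading of that mechanism, in which the memory interaction fires when a short-term and a long-term gradient EMA disagree strongly (cosine similarity below -τ); the function names and the flattening of gradients are assumptions, not the paper's or the repository's exact code.

```python
import torch
import torch.nn.functional as F

GAMMA, GAMMA_PRIME, TAU = 0.9, 0.3, 0.75  # values quoted in the Experiment Setup row

def update_grad_emas(grad, ema_slow, ema_fast):
    """Maintain two EMAs of a layer gradient: a long-term one with
    coefficient GAMMA and a short-term one with GAMMA_PRIME."""
    ema_slow = GAMMA * ema_slow + (1 - GAMMA) * grad
    ema_fast = GAMMA_PRIME * ema_fast + (1 - GAMMA_PRIME) * grad
    return ema_slow, ema_fast

def memory_trigger(ema_slow, ema_fast, tau=TAU):
    """Signal a memory interaction when the two gradient EMAs point in
    strongly opposing directions (cosine similarity below -tau)."""
    cos = F.cosine_similarity(ema_fast.view(1, -1), ema_slow.view(1, -1)).item()
    return cos < -tau
```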