DEPTS: Deep Expansion Learning for Periodic Time Series Forecasting

Authors: Wei Fan, Shun Zheng, Xiaohan Yi, Wei Cao, Yanjie Fu, Jiang Bian, Tie-Yan Liu

ICLR 2022

Each entry below lists a reproducibility variable, the result, and the supporting LLM response.
Research Type: Experimental. LLM response: "Extensive experiments on both synthetic data and real-world data demonstrate the effectiveness of DEPTS on handling PTS."
Researcher Affiliation: Collaboration. LLM response: "Wei Fan (1), Shun Zheng (2), Xiaohan Yi (2), Wei Cao (2), Yanjie Fu (1), Jiang Bian (2), Tie-Yan Liu (2); (1) University of Central Florida, (2) Microsoft Research."
Pseudocode: Yes. LLM response: "Algorithm 1: Parameter initialization for the periodicity module."
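As a hedged illustration (the paper's Algorithm 1 itself is not reproduced in this report), the sketch below initializes a sum-of-cosines periodicity function g(t) = A0 + sum_k A_k cos(2*pi*f_k*t + p_k) from the dominant FFT frequencies of the training series. The function names, the FFT-based frequency selection, and the top-K rule are assumptions for illustration, not necessarily the paper's exact procedure.

```python
import numpy as np

def init_periodicity_params(x, num_terms=32):
    # Hedged sketch: pick the num_terms strongest frequencies of the
    # mean-centered training series x and read off amplitude/phase
    # initializations for g(t) = A0 + sum_k A_k * cos(2*pi*f_k*t + p_k).
    T = len(x)
    spec = np.fft.rfft(x - x.mean())   # one-sided spectrum
    freqs = np.fft.rfftfreq(T)         # frequencies in cycles per step
    # Rank non-DC bins by magnitude; the +1 maps back to spec indices.
    order = np.argsort(np.abs(spec[1:]))[::-1][:num_terms] + 1
    return {
        "A0": x.mean(),
        "A": 2.0 * np.abs(spec[order]) / T,  # approximate amplitudes
        "f": freqs[order],
        "p": np.angle(spec[order]),
    }

def periodic_signal(params, t):
    # Evaluate g(t); works for time steps beyond the training range,
    # which is how a periodicity module contributes at forecast time.
    t = np.asarray(t, dtype=float)[:, None]
    terms = params["A"] * np.cos(2 * np.pi * params["f"] * t + params["p"])
    return params["A0"] + terms.sum(axis=1)
```

In the paper's pipeline, such initial values are subsequently refined during model training; the sketch covers only the initialization step that Algorithm 1 addresses.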
Open Source Code: Yes. LLM response: "All codes are publicly available at https://github.com/weifantt/DEPTS."
Open Datasets: Yes. LLM response: "We adopt three existing PTS-related datasets, ELECTRICITY, TRAFFIC, and M4 (HOURLY), which contain various long-term (quarterly, yearly), mid-term (monthly, weekly), and short-term (daily, hourly) periodic effects corresponding to regular economic and social activities. These datasets serve as common benchmarks for many recent studies (Yu et al., 2016; Rangapuram et al., 2018; Salinas et al., 2020; Oreshkin et al., 2020). ... we construct two new benchmarks with sufficiently long PTS from public data sources. The first one, denoted as CAISO, contains eight years of hourly actual electricity load series in different zones of California. The second one, referred to as NP, includes eight years of hourly energy production volume series in multiple European countries."
Dataset Splits: Yes. LLM response: "For all benchmarks, we search for the best hyper-parameters of DEPTS on the validation set. ... We divide the whole PTS signals x_{0:T} into the training part D_train = x_{0:T_v} and the validation part D_val = x_{T_v:T}, where T_v is the split time-step. ... all these hyper-parameters are searched on a validation set, which is defined as the last week before the test split."
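The quoted scheme amounts to plain index slicing. A minimal sketch, assuming hourly data (so the "last week" validation window spans 168 steps) and using our own variable names:

```python
HOURS_PER_WEEK = 168  # "last week before the test split", assuming hourly data

def split_series(x, test_len):
    # Split x[0:T] into train / validation / test: the test part is the
    # final test_len steps and validation is the week preceding it,
    # i.e. train = x_{0:T_v} and val = x_{T_v : T_v + 168}.
    T = len(x)
    test_start = T - test_len
    val_start = test_start - HOURS_PER_WEEK  # this is T_v
    return x[:val_start], x[val_start:test_start], x[test_start:]
```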
Hardware Specification: No. LLM response: The paper does not specify the hardware used, such as CPU/GPU models or memory; it refers to experiments being run without describing the underlying machines.
Software Dependencies: No. LLM response: The paper mentions using "Adam (Kingma & Ba, 2014)" as an optimizer and the "Auto ARIMA implementation provided by Löning et al. (2019)" for a baseline, but it does not specify versioned software dependencies for its own implementation (e.g., the Python version, a deep learning framework such as TensorFlow or PyTorch, or other libraries with version numbers).
Experiment Setup: Yes. LLM response: "Table 4 and Table 5 provide detailed hyper-parameters for N-BEATS and DEPTS, respectively, including 'Iterations', 'Loss', 'Forecast horizon (H)', 'Lookback horizon', 'Training horizon', 'Layer number', 'Layer size', 'Batch size', 'Learning rate', and 'Optimizer'."
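To make the listed quantities concrete, a hedged sketch of such a configuration follows. The field names mirror the hyper-parameters quoted above, while the values are illustrative placeholders rather than the tuned settings in the paper's Tables 4 and 5 (only the Adam optimizer is explicitly named in the paper).

```python
# Illustrative placeholder values; not the paper's tuned settings.
depts_config = {
    "iterations": 10_000,
    "loss": "smape",              # a common PTS forecasting loss; assumed here
    "forecast_horizon_H": 24,
    "lookback_horizon": 7 * 24,   # typically a multiple of H
    "training_horizon": 10 * 24,
    "layer_number": 4,
    "layer_size": 512,
    "batch_size": 1024,
    "learning_rate": 1e-3,
    "optimizer": "Adam",          # Adam (Kingma & Ba, 2014), per the paper
}
```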