Periodicity Decoupling Framework for Long-term Series Forecasting

Authors: Tao Dai, Beiliang Wu, Peiyuan Liu, Naiqi Li, Jigang Bao, Yong Jiang, Shu-Tao Xia

ICLR 2024

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | Extensive experimental results across seven real-world long-term time series datasets demonstrate the superiority of our method over other state-of-the-art methods, in terms of both forecasting performance and computational efficiency. |
| Researcher Affiliation | Collaboration | (1) College of Computer Science and Software Engineering, Shenzhen University, China; (2) National Engineering Laboratory for Big Data System Computing Technology, Shenzhen University, China; (3) Tsinghua Shenzhen International Graduate School, Tsinghua University, Shenzhen, China; (4) WeBank Institute of Financial Technology, Shenzhen University, China |
| Pseudocode | No | The paper provides architectural diagrams and mathematical equations, but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/Hank0626/PDF. |
| Open Datasets | Yes | We conduct extensive experiments on seven popular real-world datasets (Zhou et al., 2021), including Electricity Transformer Temperature (ETT) with its four sub-datasets (ETTh1, ETTh2, ETTm1, ETTm2), Weather, Electricity, and Traffic. |
| Dataset Splits | Yes | We adopt the same train/val/test split ratio of 0.6:0.2:0.2 as Zhou et al. (2021) for the ETT datasets, and split the remaining three datasets by the ratio 0.7:0.1:0.2, following Wu et al. (2021). |
| Hardware Specification | No | The paper discusses computational efficiency and reports multiply-accumulate operation (MAC) comparisons, but it does not specify the hardware used to run the experiments, such as GPU or CPU models. |
| Software Dependencies | No | The paper mentions general deep learning components and techniques such as FFT, Batch Norm, and Conv1d, but it does not provide version numbers for any software dependencies or libraries (e.g., PyTorch, TensorFlow, or specific Python packages). |
| Experiment Setup | Yes | All models adopt the same prediction lengths T ∈ {96, 192, 336, 720}. For the look-back window t, we conduct experiments on PDF using t = 336 and t = 720... Denoting the patch length as p and the stride length as s, we divide X_i ∈ ℝ^{f_i × p_i} along dimension p_i and aggregate along dimension f_i to form a patch. |
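
The split ratios quoted in the Dataset Splits row reduce to simple chronological index arithmetic. A minimal sketch in Python (the helper name and example values are ours, not taken from the paper or its repository):

```python
import numpy as np

def chronological_split(series: np.ndarray, ratios=(0.7, 0.1, 0.2)):
    """Split a series chronologically into train/val/test segments.

    The paper uses 0.6:0.2:0.2 for the four ETT datasets (following
    Zhou et al., 2021) and 0.7:0.1:0.2 for Weather, Electricity, and
    Traffic (following Wu et al., 2021).
    """
    n = len(series)
    train_end = int(n * ratios[0])
    val_end = train_end + int(n * ratios[1])
    return series[:train_end], series[train_end:val_end], series[val_end:]

# Example with the ETT ratio on a 10,000-step series:
train, val, test = chronological_split(np.zeros(10_000), ratios=(0.6, 0.2, 0.2))
print(len(train), len(val), len(test))  # 6000 2000 2000
```

Splitting chronologically rather than by shuffling is the standard protocol for these forecasting benchmarks, since shuffled splits would leak future time steps into training.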
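
For the patching step quoted in the Experiment Setup row, a rough PyTorch sketch of the reshape-and-patch idea, under our reading of the notation X_i ∈ ℝ^{f_i × p_i}; the function, the example period, and the divisibility assumption are illustrative and not the authors' implementation:

```python
import torch

def periodic_patches(x: torch.Tensor, period: int, patch_len: int, stride: int):
    """Fold a 1D look-back window into an (f_i x p_i) grid, then slide
    patches of length `patch_len` with step `stride` along the period
    dimension p_i, keeping the frequency dimension f_i intact.

    x: (batch, t); for simplicity we assume t is divisible by `period`.
    Returns: (batch, num_patches, f_i, patch_len).
    """
    b, t = x.shape
    f_i = t // period                   # how many full periods fit in the window
    x2d = x.reshape(b, f_i, period)     # X_i in R^{f_i x p_i}
    patches = x2d.unfold(dimension=2, size=patch_len, step=stride)
    # unfold yields (b, f_i, num_patches, patch_len); move patches forward
    return patches.permute(0, 2, 1, 3)

# Example: look-back window t = 336 with an assumed period of 24 -> f_i = 14.
x = torch.randn(8, 336)
out = periodic_patches(x, period=24, patch_len=8, stride=4)
print(out.shape)  # torch.Size([8, 5, 14, 8]) -- 5 patches of length 8 per row
```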