Frequency Adaptive Normalization For Non-stationary Time Series Forecasting

Authors: Weiwei Ye, Songgaojun Deng, Qiaosha Zou, Ning Gui

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We instantiate FAN on four widely used forecasting models as backbones and evaluate their prediction performance improvements on eight benchmark datasets.
Researcher Affiliation | Collaboration | Central South University; University of Amsterdam; Zhejiang Lab
Pseudocode | Yes | GPU-friendly PyTorch pseudocode is in Appendix A.2. Listing 1: GPU-Friendly Implementation of FRL
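The authoritative pseudocode is in the paper's Appendix A.2. As a rough, non-authoritative sketch of the core idea suggested by the method's name (separating the dominant frequency components of each input window from its residual via the FFT; the function name, the NumPy implementation, and the top-K selection rule here are all assumptions, not the authors' listing), one could write:

```python
import numpy as np

def remove_topk_frequencies(x, k=3):
    """Split a 1-D window into its top-k frequency components and a residual.

    Illustrative sketch only; the paper's actual FAN layer is defined in
    Appendix A.2 and operates on batched PyTorch tensors.
    """
    spec = np.fft.rfft(x)
    # Indices of the k largest-amplitude frequency bins.
    topk = np.argsort(np.abs(spec))[-k:]
    mask = np.zeros_like(spec)
    mask[topk] = spec[topk]
    principal = np.fft.irfft(mask, n=len(x))  # dominant periodic part
    residual = x - principal                  # remainder passed to the backbone
    return principal, residual
```

By construction the two parts sum back to the original window, so no information is discarded by the decomposition itself.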
Open Source Code | Yes | Our code is publicly available. http://github.com/wayne155/FAN
Open Datasets | Yes | We use eight popular datasets in multivariate time series forecasting as benchmarks, including: (1-4) ETT (Electricity Transformer Temperature) [47]... (5) Electricity... (6) Exchange Rate... (7) Traffic... (8) Weather... (ETT data: https://github.com/zhouhaoyi/ETDataset)
Dataset Splits | Yes | The split ratio for training, validation, and test sets is set to 7:2:1 for all the datasets.
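The stated 7:2:1 chronological split can be sketched as follows (a minimal illustration; the exact boundary handling in the authors' code may differ):

```python
def split_series(n_samples, ratios=(0.7, 0.2, 0.1)):
    """Return (train_end, val_end) index boundaries for a chronological
    train/val/test split; the test set is everything after val_end."""
    train_end = int(n_samples * ratios[0])
    val_end = train_end + int(n_samples * ratios[1])
    return train_end, val_end

# Example: 10,000 timesteps -> train [0, 7000), val [7000, 9000), test [9000, 10000)
train_end, val_end = split_series(10_000)
```

Splitting chronologically (rather than shuffling) is standard for forecasting benchmarks, since the test set must lie strictly in the future of the training data.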
Hardware Specification | Yes | All the experiments are implemented in PyTorch [34] and are conducted for five runs with fixed seeds {1, 2, 3, 4, 5} on an NVIDIA RTX 4090 GPU (24GB).
Software Dependencies | No | All the experiments are implemented in PyTorch [34]. No version numbers for PyTorch or other specific libraries are given.
Experiment Setup | Yes | We used a batch size of 32, a learning rate of 0.0003, and trained each run for 100 epochs, with early stopping patience set to 5. For the different baselines, we follow the implementation and settings provided in their official code repositories. Adam [18] is used as the default optimizer across all experiments.
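The stated hyperparameters can be collected into a small sketch (the early-stopping logic below is an assumption; the paper only states "patience as 5" without specifying the stopping criterion):

```python
# Hyperparameters as stated in the experiment setup.
BATCH_SIZE = 32
LEARNING_RATE = 3e-4
MAX_EPOCHS = 100
PATIENCE = 5

class EarlyStopper:
    """Stop training when validation loss fails to improve for
    `patience` consecutive epochs (assumed criterion)."""

    def __init__(self, patience=PATIENCE):
        self.patience = patience
        self.best = float("inf")
        self.bad_epochs = 0

    def step(self, val_loss):
        """Record one epoch's validation loss; return True to stop."""
        if val_loss < self.best:
            self.best = val_loss
            self.bad_epochs = 0
        else:
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```

In a PyTorch training loop this would be paired with `torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)`, matching the stated optimizer and learning rate.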