Deep Frequency Derivative Learning for Non-stationary Time Series Forecasting

Authors: Wei Fan, Kun Yi, Hangting Ye, Zhiyuan Ning, Qi Zhang, Ning An

IJCAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We have conducted extensive experiments on seven real-world datasets, which have demonstrated the consistent superiority compared with state-of-the-art methods in both time series forecasting and shift alleviation.
Researcher Affiliation | Academia | 1Medical Sciences Division, University of Oxford; 2School of Computer Science and Technology, Beijing Institute of Technology; 3School of Artificial Intelligence, Jilin University; 4Computer Network Information Center, Chinese Academy of Sciences; 5Department of Computer Science and Technology, Tongji University; 6School of Computer Science and Information Engineering, Hefei University of Technology
Pseudocode | No | The paper describes its methods in text and equations but does not provide any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an explicit statement about releasing source code or a link to a code repository.
Open Datasets | Yes | We follow previous work [Wu et al., 2021; Zhou et al., 2022; Nie et al., 2023; Yi et al., 2023b] to evaluate our DERITS on different representative datasets from various application scenarios, including Electricity [Asuncion and Newman, 2007], Traffic [Wu et al., 2021], ETT [Zhou et al., 2021], Exchange [Lai et al., 2018], ILI [Wu et al., 2021], and Weather [Wu et al., 2021].
Dataset Splits | Yes | We preprocess all datasets following the recent frequency learning work [Yi et al., 2023b] to normalize the datasets and split the datasets into training, validation, and test sets by the ratio of 7:2:1.
Hardware Specification | Yes | We conduct our experiments on a single NVIDIA RTX 3090 24GB GPU with PyTorch 1.8 [Paszke et al., 2019].
Software Dependencies | Yes | We conduct our experiments on a single NVIDIA RTX 3090 24GB GPU with PyTorch 1.8 [Paszke et al., 2019].
Experiment Setup | Yes | We take MSE (Mean Squared Error) as the loss function and report the results of MAE (Mean Absolute Error) and RMSE (Root Mean Squared Error) as the evaluation metrics. A lower MAE/RMSE indicates better time series forecasting performance. More detailed information about the implementation is included in Appendix A.3. ... We set the lookback window size L as 96 and vary the prediction length H in {96, 192, 336, 720}; for the Traffic dataset, the prediction length H is in {48, 96, 192, 336}.
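Since the paper releases no code, the 7:2:1 chronological split with normalization noted in the Dataset Splits row can only be illustrated, not reproduced exactly. The sketch below is our assumption of the standard preprocessing used by the frequency-learning line of work the paper cites (function name and normalization details are ours, not the authors'):

```python
import numpy as np

def split_series(data, ratios=(0.7, 0.2, 0.1)):
    """Chronological train/val/test split by ratio (default 7:2:1),
    with z-score normalization fit on the training portion only.
    A hedged sketch; the paper's exact preprocessing is in its
    Appendix, which this report does not include."""
    n = len(data)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = data[:n_train]
    val = data[n_train:n_train + n_val]
    test = data[n_train + n_val:]
    # Statistics come from the training split alone, so no
    # information from the future leaks into normalization.
    mean, std = train.mean(axis=0), train.std(axis=0)
    return [(x - mean) / std for x in (train, val, test)]
```

Splitting chronologically (rather than shuffling) matters for forecasting: the test set must lie strictly after the training data in time.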
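The evaluation metrics named in the Experiment Setup row are standard; for concreteness, a minimal NumPy implementation of MAE and RMSE (ours, not taken from the paper) looks like this:

```python
import numpy as np

def mae(y_true, y_pred):
    """Mean Absolute Error: average magnitude of the errors."""
    return np.mean(np.abs(y_true - y_pred))

def rmse(y_true, y_pred):
    """Root Mean Squared Error: square root of the MSE, so it is
    in the same units as the target and penalizes large errors more."""
    return np.sqrt(np.mean((y_true - y_pred) ** 2))
```

As the report states, lower values of either metric indicate better forecasts; RMSE weights large deviations more heavily than MAE.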