CycleNet: Enhancing Time Series Forecasting through Modeling Periodic Patterns

Authors: Shengsheng Lin, Weiwei Lin, Xinyi Hu, Wentai Wu, Ruichao Mo, Haocheng Zhong

NeurIPS 2024

Reproducibility assessment. Each variable below lists the assessed result and the supporting excerpt (LLM response) from the paper:

Research Type: Experimental
Evidence: "CycleNet achieves state-of-the-art prediction accuracy in multiple domains including electricity, weather, and energy, while offering significant efficiency advantages by reducing over 90% of the required parameter quantity." From Section 4 (Experiments): "Datasets: We utilized widely adopted benchmark datasets including the ETT series [59], Weather, Traffic, Electricity, and Solar-Energy [24]."

Researcher Affiliation: Academia
Evidence: (1) School of Computer Science and Engineering, South China University of Technology, China; (2) Pengcheng Laboratory, China; (3) Department of Computer Science and Engineering, The Chinese University of Hong Kong; (4) College of Information Science and Technology, Jinan University, China.

Pseudocode: Yes
Evidence: Detailed pseudocode is provided in Appendix B.1 of the paper.
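
For orientation, here is a minimal PyTorch sketch of the residual-cycle idea that the paper's pseudocode formalizes: a learnable cyclic component of length W is subtracted from the input window, a small backbone forecasts the residual, and the cycle is added back over the forecast horizon. All class names, shapes, and the MLP backbone below are illustrative assumptions, not the authors' Appendix B.1 pseudocode.

```python
import torch
import torch.nn as nn

class RecurrentCycle(nn.Module):
    """Learnable per-channel cyclic component of length cycle_len (the paper's W)."""
    def __init__(self, cycle_len: int, channels: int):
        super().__init__()
        self.cycle_len = cycle_len
        # One learnable value per (position-in-cycle, channel) pair.
        self.data = nn.Parameter(torch.zeros(cycle_len, channels))

    def forward(self, index: torch.Tensor, length: int) -> torch.Tensor:
        # index: (batch,) phase of each window's first step within the cycle.
        steps = torch.arange(length, device=index.device)
        gather = (index.unsqueeze(1) + steps) % self.cycle_len  # (batch, length)
        return self.data[gather]  # (batch, length, channels)

class CycleNetSketch(nn.Module):
    def __init__(self, seq_len: int, pred_len: int, channels: int,
                 cycle_len: int, hidden: int = 512):
        super().__init__()
        self.seq_len, self.pred_len, self.cycle_len = seq_len, pred_len, cycle_len
        self.cycle = RecurrentCycle(cycle_len, channels)
        # Channel-shared MLP backbone applied along the time axis
        # (hidden size 512 matches the experiment setup reported below).
        self.backbone = nn.Sequential(
            nn.Linear(seq_len, hidden), nn.ReLU(), nn.Linear(hidden, pred_len))

    def forward(self, x: torch.Tensor, index: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq_len, channels); index: (batch,) cycle phase of x[:, 0].
        x = x - self.cycle(index, self.seq_len)                # remove cyclic part
        y = self.backbone(x.transpose(1, 2)).transpose(1, 2)   # forecast residual
        out_index = (index + self.seq_len) % self.cycle_len    # phase at horizon start
        return y + self.cycle(out_index, self.pred_len)        # add cycle back
```
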
Open Source Code: Yes
Evidence: "The source code is available at: https://github.com/ACAT-SCUT/CycleNet."

Open Datasets: Yes
Evidence: "Datasets: We utilized widely adopted benchmark datasets including the ETT series [59], Weather, Traffic, Electricity, and Solar-Energy [24]."

Dataset Splits: Yes
Evidence: "Following prior works such as Autoformer [51] and iTransformer [37], we split the ETTs dataset into training, validation, and test sets with a ratio of 6:2:2, while the other datasets were split in a ratio of 7:1:2."
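
These ratios correspond to contiguous chronological blocks. A minimal sketch of such a ratio-based split, assuming simple index arithmetic rather than the authors' actual data loader:

```python
def chronological_splits(n: int, ratios=(0.7, 0.1, 0.2)):
    """Split n time steps into contiguous train/val/test index ranges."""
    train_end = int(n * ratios[0])
    val_end = train_end + int(n * ratios[1])
    return (0, train_end), (train_end, val_end), (val_end, n)

# 7:1:2 split (Weather, Traffic, Electricity, Solar-Energy):
train, val, test = chronological_splits(10_000, (0.7, 0.1, 0.2))
# 6:2:2 split (the ETT datasets):
train, val, test = chronological_splits(10_000, (0.6, 0.2, 0.2))
```
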
Hardware Specification: Yes
Evidence: "All experiments in this paper were implemented using PyTorch [41], trained using the Adam [23] optimizer, and executed on a single NVIDIA GeForce RTX 4090 GPU with 24 GB memory."

Software Dependencies: No
Evidence: The same excerpt names PyTorch [41] and the Adam optimizer [23] but gives no library versions or dependency list, hence the result is "No".

Experiment Setup: Yes
Evidence: "CycleNet was trained for 30 epochs with early stopping based on a patience of 5 on the validation set. The batch size was set uniformly to 256 for ETTs and the Weather dataset, and 64 for the remaining datasets. This adjustment was made because the latter datasets have a larger number of channels, requiring a relatively smaller batch size to avoid out-of-memory issues. The learning rate was selected from the range {0.002, 0.005, 0.01} based on the performance on the validation set. The hyperparameter W of CycleNet is set by default to match the cycle length in Table 1. Additionally, the hidden layer size of CycleNet/MLP was uniformly set to 512."
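
Putting the reported recipe together, a minimal training-loop sketch. The model, the data loaders (yielding window, cycle index, and target, as in the sketch above), and the MSE criterion are assumptions; the paper's own training loop is not printed in this excerpt.

```python
import torch

def train(model, train_loader, val_loader,
          lr=0.005, epochs=30, patience=5):
    # lr is chosen from {0.002, 0.005, 0.01} on the validation set.
    optim = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = torch.nn.MSELoss()
    best_val, bad_epochs = float("inf"), 0
    for epoch in range(epochs):  # at most 30 epochs
        model.train()
        for x, index, y in train_loader:
            optim.zero_grad()
            criterion(model(x, index), y).backward()
            optim.step()
        model.eval()
        with torch.no_grad():
            val = sum(criterion(model(x, i), y).item()
                      for x, i, y in val_loader)
        if val < best_val:
            best_val, bad_epochs = val, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:  # early stopping, patience 5
                break
```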