WaveBound: Dynamic Error Bounds for Stable Time Series Forecasting

Authors: Youngin Cho, Daejin Kim, Dongmin Kim, Mohammad Azam Khan, Jaegul Choo

NeurIPS 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | With extensive experiments, we show that WaveBound consistently improves upon the existing models by large margins, including the state-of-the-art model. We evaluate our WaveBound method on real-world benchmarks using various time series forecasting models, including the state-of-the-art models.
Researcher Affiliation | Collaboration | Youngin Cho*, Daejin Kim*, Dongmin Kim, Mohammad Azam Khan, Jaegul Choo, KAIST AI, {choyi0521,kiddj,tommy.dm.kim,azamkhan,jchoo}@kaist.ac.kr. This work was also partially supported by NAVER Corp.
Pseudocode | Yes | Figure 2: Our proposed WaveBound method provides the dynamic error bounds of the training loss for each time step and feature using the target network. The target network g_τ is updated with the EMA of the source network g_θ... A summary of WaveBound is provided in Appendix B. (A hedged code sketch of this mechanism follows the table.)
Open Source Code | Yes | Did you include the code, data, and instructions needed to reproduce the main experimental results (either in the supplemental material or as a URL)? [Yes] See Appendix. Did you include any new assets either in the supplemental material or as a URL? [Yes] The codes are provided in the supplementary material.
Open Datasets | Yes | Datasets. We examine the performance of forecasting models in six real-world benchmarks. (1) The Electricity Transformer Temperature (ETT) [7] dataset... (2) The Electricity (ECL) dataset... (3) The Exchange [14] dataset... (4) The Traffic dataset... (5) The Weather dataset... (6) The ILI dataset... (URLs for ECL, Traffic, Weather, and ILI are given in footnotes of the paper).
Dataset Splits | Yes | We split each dataset into train/validation/test as follows: 6:2:2 ratio for the ETT dataset and 7:1:2 ratio for the rest. (A chronological-split sketch follows the table.)
Hardware Specification | No | The paper does not describe the specific hardware used for its experiments (e.g., GPU/CPU models, memory) in the provided main text. Its self-assessment checklist indicates that such details were included, likely in an appendix that was not provided.
Software Dependencies | No | The paper mentions several deep learning models (Autoformer, Pyraformer, Informer, LSTNet, TCN, N-BEATS) but does not provide specific version numbers for any software dependencies or libraries (e.g., PyTorch, TensorFlow, Python).
Experiment Setup | Yes | As in Autoformer [5], we set L = 36 and M ∈ {24, 36, 48, 60} for the ILI dataset, and set L = 96 and M ∈ {96, 192, 336, 720} for the other datasets. ... where ϵ is a hyperparameter indicating how far the error bound of the source network can be from the error of the target network. (Hedged sketches of this windowing setup and of the ϵ-bounded loss follow the table.)
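
The Pseudocode and Experiment Setup rows above describe WaveBound's core mechanism: a target network g_τ, maintained as an EMA of the source network g_θ, supplies a dynamic error bound for each prediction step and feature, and ϵ controls how far the source network's error may fall below the target's error. Below is a minimal PyTorch-style sketch of that idea. The class and method names are hypothetical, and the plain element-wise squared error, the flooding-style identity |x − b| + b used to impose the bound, and the EMA momentum value are assumptions made for illustration; this is a sketch of the mechanism as summarized in the rows above, not the authors' released code.

    import copy
    import torch
    import torch.nn as nn

    class WaveBoundSketch(nn.Module):
        """Hypothetical sketch of WaveBound-style training (not the released code)."""

        def __init__(self, model: nn.Module, eps: float = 1e-2, momentum: float = 0.999):
            super().__init__()
            self.source = model                      # g_theta, trained by gradient descent
            self.target = copy.deepcopy(model)       # g_tau, updated only via EMA
            for p in self.target.parameters():
                p.requires_grad_(False)
            self.eps = eps
            self.momentum = momentum

        @torch.no_grad()
        def update_target(self):
            # EMA update: g_tau <- m * g_tau + (1 - m) * g_theta
            for p_t, p_s in zip(self.target.parameters(), self.source.parameters()):
                p_t.mul_(self.momentum).add_(p_s, alpha=1.0 - self.momentum)

        def loss(self, x, y):
            # Element-wise squared error for every time step and feature.
            err_src = (self.source(x) - y) ** 2
            with torch.no_grad():
                err_tgt = (self.target(x) - y) ** 2
            bound = err_tgt - self.eps               # dynamic error bound per element
            # Flooding-style lower bound: once err_src drops below the bound,
            # the gradient direction flips and further fitting of that element is discouraged.
            return (torch.abs(err_src - bound) + bound).mean()

A typical training step would compute loss(x, y), backpropagate (only the source network has trainable parameters), step the optimizer, and then call update_target().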
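The Dataset Splits row reports chronological train/validation/test ratios of 6:2:2 for ETT and 7:1:2 for the remaining benchmarks. A small sketch of such a split is below; the function name, the contiguous-in-time ordering, and the toy array shape are illustrative assumptions rather than details taken from the paper.

    import numpy as np

    def chronological_split(series: np.ndarray, ratios=(0.7, 0.1, 0.2)):
        """Split a (time, features) array into contiguous train/val/test blocks."""
        n = len(series)
        n_train = int(n * ratios[0])
        n_val = int(n * ratios[1])
        return (series[:n_train],
                series[n_train:n_train + n_val],
                series[n_train + n_val:])

    # 7:1:2 for ECL/Exchange/Traffic/Weather/ILI; 6:2:2 would be used for ETT.
    data = np.random.randn(10_000, 7)                # toy multivariate series
    train, val, test = chronological_split(data, ratios=(0.7, 0.1, 0.2))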
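The Experiment Setup row fixes the input length L and the forecasting horizon M per benchmark (L = 36 with M ∈ {24, 36, 48, 60} for ILI; L = 96 with M ∈ {96, 192, 336, 720} otherwise). A sliding-window sketch consistent with those values is shown below; the stride of 1 and the array layout are assumptions, not values stated in the row.

    import numpy as np

    def make_windows(series: np.ndarray, L: int, M: int, stride: int = 1):
        """Build (input, target) pairs: L past steps predict the next M steps."""
        xs, ys = [], []
        for start in range(0, len(series) - L - M + 1, stride):
            xs.append(series[start:start + L])
            ys.append(series[start + L:start + L + M])
        return np.stack(xs), np.stack(ys)

    # Example: the L = 96, M = 336 configuration used for the non-ILI benchmarks.
    x, y = make_windows(np.random.randn(5_000, 7), L=96, M=336)
    print(x.shape, y.shape)   # (4569, 96, 7) (4569, 336, 7)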