Self-Supervised Contrastive Learning for Long-term Forecasting

Authors: Junwoo Park, Daehoon Gwak, Jaegul Choo, Edward Choi

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Extensive experiments demonstrate that our approach outperforms 14 baseline models in multiple experiments over nine long-term benchmarks, especially in challenging scenarios that require a significantly long output for forecasting.
Researcher Affiliation | Academia | Junwoo Park, Daehoon Gwak, Jaegul Choo, Edward Choi; Kim Jaechul Graduate School of AI, KAIST, Daejeon, Republic of Korea; {junwoo.park,daehoon.gwak,jchoo,edwardchoi}@kaist.ac.kr
Pseudocode | Yes | Algorithm 1 AutoCon: Autocorrelation-based Contrastive Learning Framework (an illustrative sketch of this idea appears after the table).
Open Source Code | Yes | Source code is available at https://github.com/junwoopark92/Self-Supervised-Contrastive-Forecsating. [...] Our source code can be accessed at a zip file in the supplementary.
Open Datasets | Yes | To validate our proposed method, we conducted extensive experiments on nine real-world datasets from six domains: mechanical systems (ETT), energy (Electricity), traffic (Traffic), weather (Weather), economics (Exchange), and disease (ILI). We follow standard protocol (Wu et al., 2021) and split all datasets into training, validation, and test sets in chronological order by the ratio of 6:2:2.
Dataset Splits | Yes | We follow standard protocol (Wu et al., 2021) and split all datasets into training, validation, and test sets in chronological order by the ratio of 6:2:2. [...] Therefore, we adopt a ratio of 6:2:2 for all datasets. (A minimal sketch of this chronological split appears after the table.)
Hardware Specification | No | The paper states 'A batch size of 32 was used, and all measurements were taken independently in the same GPU and server environment.' However, it does not specify the GPU model or the server configuration used.
Software Dependencies | No | The paper mentions 'All models were implemented using PyTorch' and 'Our redesigned model and AutoCon were implemented based on the TSlib code repository', but it does not specify version numbers for PyTorch, TSlib, or any other software dependencies.
Experiment Setup | Yes | The input length I is set to 14 (for the ILI dataset), 48 (for the Exchange dataset), 192 (for the ETTm dataset), and 96 (for the other datasets). [...] The hyperparameter sensitivity analysis is available in Appendix A.6. [...] A batch size of 32 was used... (These reported settings are collected into an illustrative config sketch after the table.)
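
The pseudocode row refers to Algorithm 1 (AutoCon). As a rough illustration only, the following is a minimal sketch of an autocorrelation-based contrastive objective in that spirit: the global autocorrelation at the lag separating two windows is used as a soft similarity target for their representations. This is not the authors' implementation, and all names (`global_autocorrelation`, `autocon_loss`, `reprs`, `starts`) are hypothetical.

```python
# Minimal sketch (not the authors' code) of an autocorrelation-based contrastive
# objective. All names are illustrative.
import torch
import torch.nn.functional as F

def global_autocorrelation(series: torch.Tensor, max_lag: int) -> torch.Tensor:
    """Autocorrelation of a 1-D series for lags 0 .. max_lag-1."""
    x = series - series.mean()
    denom = (x * x).sum()
    vals = [torch.ones((), dtype=x.dtype)]  # lag 0 has autocorrelation 1
    for lag in range(1, max_lag):
        vals.append((x[:-lag] * x[lag:]).sum() / denom)
    return torch.stack(vals)

def autocon_loss(reprs: torch.Tensor, starts: torch.Tensor, acf: torch.Tensor) -> torch.Tensor:
    """Pull the cosine similarity of two window representations toward the
    autocorrelation at the lag given by their temporal distance."""
    sim = F.cosine_similarity(reprs.unsqueeze(1), reprs.unsqueeze(0), dim=-1)  # (B, B)
    lags = (starts.unsqueeze(1) - starts.unsqueeze(0)).abs()                   # (B, B)
    target = acf[lags.clamp(max=len(acf) - 1)].to(sim.dtype)                   # soft labels
    off_diag = ~torch.eye(len(starts), dtype=torch.bool)                       # drop self-pairs
    return F.mse_loss(sim[off_diag], target[off_diag])

# Toy usage: random "representations" for 8 windows drawn from a sine series.
series = torch.sin(torch.arange(0, 500, dtype=torch.float32) * 0.1)
acf = global_autocorrelation(series, max_lag=200)
reprs = torch.randn(8, 16)
starts = torch.randint(0, 300, (8,))
print(autocon_loss(reprs, starts, acf))
```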
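
The dataset-splits row states that every series is divided chronologically by a 6:2:2 ratio. A minimal sketch of such a split, assuming a plain NumPy array as input (the function and variable names are hypothetical, not from the paper's code):

```python
# Chronological 6:2:2 train/validation/test split (illustrative sketch).
import numpy as np

def chronological_split(series: np.ndarray, ratios=(0.6, 0.2, 0.2)):
    """Split a time series into train/val/test sets in chronological order."""
    n = len(series)
    n_train = int(n * ratios[0])
    n_val = int(n * ratios[1])
    train = series[:n_train]
    val = series[n_train:n_train + n_val]
    test = series[n_train + n_val:]
    return train, val, test

series = np.arange(1000, dtype=float)  # toy univariate series
train, val, test = chronological_split(series)
print(len(train), len(val), len(test))  # 600 200 200
```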
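
For the experiment-setup row, the reported input lengths and batch size can be summarized in a small, hypothetical configuration mapping (illustrative only; the paper's code may organize these settings differently):

```python
# Illustrative summary of the reported settings; not the authors' config file.
INPUT_LENGTHS = {
    "ILI": 14,        # input length I for the ILI dataset
    "Exchange": 48,   # input length I for the Exchange dataset
    "ETTm": 192,      # input length I for the ETTm dataset
    "default": 96,    # all other datasets
}
BATCH_SIZE = 32

def input_length(dataset: str) -> int:
    """Return the reported input length for a dataset, falling back to 96."""
    return INPUT_LENGTHS.get(dataset, INPUT_LENGTHS["default"])

print(input_length("ILI"), input_length("Traffic"))  # 14 96
```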