TimeSiam: A Pre-Training Framework for Siamese Time-Series Modeling
Authors: Jiaxiang Dong, Haixu Wu, Yuxuan Wang, Yun-Zhong Qiu, Li Zhang, Jianmin Wang, Mingsheng Long
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | TimeSiam consistently outperforms extensive advanced pre-training baselines, demonstrating superior forecasting and classification capabilities across 13 standard benchmarks in both intra- and cross-domain scenarios. We perform extensive experiments across two mainstream time series analysis tasks: forecasting and classification, covering both in- and cross-domain settings. |
| Researcher Affiliation | Academia | School of Software, BNRist, Tsinghua University. Jiaxiang Dong <djx20@mails.tsinghua.edu.cn>. Haixu Wu <wuhx23@mails.tsinghua.edu.cn>. Correspondence to: Mingsheng Long <mingsheng@tsinghua.edu.cn>. |
| Pseudocode | No | The paper describes its framework and process using figures and mathematical equations but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at https://github.com/thuml/TimeSiam. |
| Open Datasets | Yes | Datasets: We summarize the experimental benchmarks in Table 1, encompassing eleven well-established datasets and two newly constructed datasets, which cover two primary tasks in time series analysis: forecasting and classification. Please refer to Appendix B for a more comprehensive description. UCI. UCI Electricity Load Time Series Dataset. https://archive.ics.uci.edu/ml/datasets/ElectricityLoadDiagrams20112014. Goldberger, A. L., Amaral, L. A., Glass, L., Hausdorff, J. M., Ivanov, P. C., Mark, R. G., Mietus, J. E., Moody, G. B., Peng, C.-K., and Stanley, H. E. PhysioBank, PhysioToolkit, and PhysioNet: components of a new research resource for complex physiologic signals. Circulation, 101(23):e215-e220, 2000. |
| Dataset Splits | Yes | Table 12. Dataset descriptions. Samples are organized as (Train/Validation/Test). For example, ETTh1, ETTh2: 8,545/2,881/2,881 samples for Train/Validation/Test respectively (see the split-count sketch after this table). |
| Hardware Specification | Yes | In this paper, all experiments were conducted on a single NVIDIA A100 SXM4 80GB GPU and implemented using the PyTorch framework (Paszke et al., 2019) for five repetitions. |
| Software Dependencies | No | In this paper, all experiments were conducted on a single NVIDIA A100 SXM4 80GB GPU and implemented using the PyTorch framework (Paszke et al., 2019) for five repetitions. The citation of Paszke et al. (2019) identifies the framework but no specific version is pinned (e.g., PyTorch 1.x vs. 2.x); see the environment-capture sketch after this table. |
| Experiment Setup | Yes | The configuration details are in Table A.3. Also, considering the size of the fine-tuned dataset and consistency with existing works, we fine-tune the model for 10 epochs for the prediction task and 50 epochs for the classification task (see the fine-tuning sketch after this table). Table 10. Pre-training and fine-tuning configurations in forecasting and classification tasks. Table 11. Two experimental configurations of TimeSiam with different model sizes. |
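The ETT sample counts quoted in the Dataset Splits row follow directly from the sliding-window convention used by most long-term forecasting codebases. Below is a minimal sketch, assuming a 12/4/4-month chronological split of hourly data and an input window of 96 steps; both are assumptions on our part, not stated in the quoted cell, and the authors' repository is the authoritative reference.

```python
# Hedged sketch: one way the ETTh1/ETTh2 sample counts in Table 12
# (8,545/2,881/2,881) can be reproduced under the common Informer-style
# protocol of 12/4/4 months of hourly data with an input window of 96 steps.
seq_len = 96                 # assumed input window length
train_pts = 12 * 30 * 24     # 8,640 hourly points for training
val_pts   = 4 * 30 * 24      # 2,880 for validation
test_pts  = 4 * 30 * 24      # 2,880 for testing

# Sliding windows of length seq_len, stride 1:
train_samples = train_pts - seq_len + 1             # 8,545
# Val/test segments conventionally reach seq_len points back into the
# preceding split so every sample has a full input window:
val_samples  = (val_pts + seq_len) - seq_len + 1    # 2,881
test_samples = (test_pts + seq_len) - seq_len + 1   # 2,881
print(train_samples, val_samples, test_samples)     # 8545 2881 2881
```

That these assumed conventions reproduce the table's exact counts suggests the standard border protocol is in use, but a reproducer should verify against the released code.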
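Because no framework version is pinned, a reproduction attempt should record its own environment alongside results. A minimal capture snippet, using only standard PyTorch calls; the values printed depend entirely on the local machine:

```python
# Record the software environment, since the paper cites PyTorch
# (Paszke et al., 2019) without pinning a version.
import sys
import torch

print("python:", sys.version.split()[0])
print("torch :", torch.__version__)
print("cuda  :", torch.version.cuda)
print("gpu   :", torch.cuda.get_device_name(0)
      if torch.cuda.is_available() else "none")
```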
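The fine-tuning schedule quoted in the Experiment Setup row (10 epochs for forecasting, 50 for classification) can be expressed as a small driver. This is a hypothetical sketch: the model, data loader, optimizer choice, and learning rate are placeholders rather than the authors' actual configuration, which lives in Tables 10-11 of the paper.

```python
import torch

# Epoch counts quoted from the paper; everything else is a placeholder.
FINETUNE_EPOCHS = {"forecasting": 10, "classification": 50}

def finetune(model, loader, task, lr=1e-4):
    """Fine-tune a pre-trained encoder on the given downstream task."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)  # optimizer assumed
    criterion = (torch.nn.MSELoss() if task == "forecasting"
                 else torch.nn.CrossEntropyLoss())
    for _ in range(FINETUNE_EPOCHS[task]):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()
    return model
```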