Time Series Contrastive Learning with Information-Aware Augmentations

Authors: Dongsheng Luo, Wei Cheng, Yingheng Wang, Dongkuan Xu, Jingchao Ni, Wenchao Yu, Xuchao Zhang, Yanchi Liu, Yuncong Chen, Haifeng Chen, Xiang Zhang

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Experiments on various datasets show highly competitive performance, with up to a 12.0% reduction in MSE on forecasting tasks and up to a 3.7% relative improvement in accuracy on classification tasks over the leading baselines. InfoTS is compared with SOTA baselines on time series forecasting and classification, and case studies give insight into the proposed criteria and meta-learner network. Detailed experimental setups, full results, and extra experiments are presented in the Appendix.
Researcher Affiliation | Collaboration | Dongsheng Luo (Florida International University), Wei Cheng (NEC Laboratories America), Yingheng Wang (Cornell University), Dongkuan Xu (North Carolina State University), Jingchao Ni (AWS AI Labs), Wenchao Yu (NEC Laboratories America), Xuchao Zhang (Microsoft), Yanchi Liu (NEC Laboratories America), Yuncong Chen (NEC Laboratories America), Haifeng Chen (NEC Laboratories America), Xiang Zhang (The Pennsylvania State University)
Pseudocode | No | The paper describes the proposed method in prose and a high-level figure (Figure 1) but does not include an explicitly labeled 'Pseudocode' or 'Algorithm' block. (A generic illustrative sketch is given after this table.)
Open Source Code | No | The paper contains no explicit statement about releasing source code for the described methodology, nor a link to a code repository.
Open Datasets | Yes | Four benchmark datasets for time series forecasting are adopted: ETTh1, ETTh2, ETTm1 (Zhou et al. 2021), and the Electricity dataset (Yue et al. 2022). These datasets are used in both univariate and multivariate settings. (A loading sketch is given after this table.)
Dataset Splits | No | The paper mentions a 'training split' and 'test set' but gives no percentages, sample counts, or validation-split details needed for reproducibility. (An assumed split convention is sketched after this table.)
Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running its experiments.
Software Dependencies | No | The paper does not explicitly list software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) required to replicate the experiments.
Experiment Setup | No | The paper defers to 'Detailed experimental setups are shown in Appendix' and gives no specific hyperparameter values or training configurations in the main text.
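
Since the paper ships no algorithm block, the following is a minimal, hypothetical sketch of a generic InfoNCE-style contrastive training step for time series. It is not the authors' InfoTS procedure, which additionally learns information-aware augmentations with a meta-learner; the `jitter` augmentation, `encoder` interface, and temperature value here are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def jitter(x, sigma=0.1):
    # Illustrative augmentation only: additive Gaussian noise.
    # InfoTS instead selects among candidate augmentations with a
    # learned meta-network; a fixed choice is used here for brevity.
    return x + sigma * torch.randn_like(x)

def info_nce(z1, z2, temperature=0.5):
    # Generic InfoNCE loss between two views; matching rows of z1/z2
    # (embeddings of the same series) are the positive pairs.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature                   # (B, B) similarities
    labels = torch.arange(z1.size(0), device=z1.device)  # diagonal = positives
    return F.cross_entropy(logits, labels)

def train_step(encoder, batch, optimizer):
    # One contrastive step: two augmented views -> embeddings -> loss.
    loss = info_nce(encoder(jitter(batch)), encoder(jitter(batch)))
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```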
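
The ETT datasets cited in the Open Datasets row are publicly hosted at https://github.com/zhouhaoyi/ETT-data. Below is a loading sketch, assuming that repository's CSV layout (a `date` column, six feature columns, and an `OT` target column):

```python
import pandas as pd

# ETTh1.csv downloaded from https://github.com/zhouhaoyi/ETT-data;
# the local file path and column layout are assumptions based on that repo.
df = pd.read_csv("ETTh1.csv", parse_dates=["date"])

values = df.drop(columns=["date"]).to_numpy()  # multivariate setting
target = df["OT"].to_numpy()                   # univariate setting (OT target)
```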
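
Because the paper omits split details, any reproduction must assume a convention. A chronological 60/20/20 train/validation/test split is common on these benchmarks; the sketch below uses it purely as an assumption, continuing from the `values` array above.

```python
def chronological_split(values, train_frac=0.6, val_frac=0.2):
    # No-shuffle split in time order, standard for forecasting; the
    # 60/20/20 ratio is assumed -- the paper does not report its splits.
    n = len(values)
    i = int(n * train_frac)
    j = int(n * (train_frac + val_frac))
    return values[:i], values[i:j], values[j:]

train, val, test = chronological_split(values)
```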