Denoising-Aware Contrastive Learning for Noisy Time Series
Authors: Shuang Zhou, Daochen Zha, Xiao Shen, Xiao Huang, Rui Zhang, Korris Chung
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on various datasets verify the effectiveness of our method. We perform empirical evaluations to answer the following research questions: RQ1: How effective is DECL for unsupervised representation learning? RQ2: Is the method effective with fine-tuning? RQ3: What are the effects of each component? RQ4: Is it robust with varied degrees of noise? RQ5: How sensitive is it to the hyper-parameters? RQ6: How does the method work in practice? |
| Researcher Affiliation | Academia | (1) Department of Computing, The Hong Kong Polytechnic University; (2) Department of Computer Science, Rice University; (3) School of Computer Science and Technology, Hainan University; (4) Department of Surgery, University of Minnesota |
| Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks. The methodology is described in natural language and illustrated with a diagram. |
| Open Source Code | Yes | The code is open-sourced at https://github.com/betterzhou/DECL |
| Open Datasets | Yes | We employ five noisy time series datasets. Sleep EDF [Goldberger et al., 2000] is an EEG dataset... CPSC18 [Liu et al., 2018], PTB-XL [Wagner et al., 2020], and Georgia [Alday et al., 2020] are ECG datasets... |
| Dataset Splits | Yes | We split the data into 40%, 20%, and 40% for the training, validation, and test sets (a minimal split sketch follows the table). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used, such as exact GPU or CPU models, or detailed computing environment specifications for running experiments. |
| Software Dependencies | No | The paper mentions 'Adam optimizer' and 'Transformer [Vaswani et al., 2017]' but does not provide specific version numbers for software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | We set the learning epochs as 100 and adopt a batch size of 128 for both pre-training and downstream tasks... In the transformer, we set L as 4, the number of heads as 4, and the hidden dimension size as 100. The details of the encoder and AR module are given in Appendix C.2. As for the hyper-parameters, we set k as 30% of the total timestamps, assign α as 0.5, and set γ as 0.1 for all the datasets. The method is optimized with the Adam optimizer; we set the learning rate as 1e-4 and the weight decay as 5e-4. (A hedged configuration sketch also follows the table.) |
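
The 40/20/40 split quoted in the Dataset Splits row is a plain index partition. The snippet below is a minimal sketch of one way to reproduce such a split, assuming a generic pool of `n_samples` examples; it is not taken from the authors' repository.

```python
import numpy as np

def split_40_20_40(n_samples, seed=0):
    """Shuffle sample indices and partition them into 40% train, 20% validation, 40% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.4 * n_samples)
    n_val = int(0.2 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Example: 1000 samples -> 400 train, 200 validation, 400 test indices.
train_idx, val_idx, test_idx = split_40_20_40(1000)
```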
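
The Experiment Setup row lists the reported hyper-parameters. The PyTorch-style sketch below only collects them into a single config and wires up the Adam optimizer as described; the `model` here is an assumed stand-in Transformer encoder, not the DECL encoder and AR module (those are specified in the paper's Appendix C.2).

```python
import torch

# Hyper-parameters as reported in the paper's experiment setup.
CONFIG = {
    "epochs": 100,            # for both pre-training and downstream tasks
    "batch_size": 128,
    "transformer_layers": 4,  # L
    "num_heads": 4,
    "hidden_dim": 100,
    "k_ratio": 0.30,          # k = 30% of the total timestamps
    "alpha": 0.5,
    "gamma": 0.1,
    "lr": 1e-4,
    "weight_decay": 5e-4,
}

# Placeholder model so the optimizer call is runnable; it is NOT the DECL architecture.
model = torch.nn.TransformerEncoder(
    torch.nn.TransformerEncoderLayer(
        d_model=CONFIG["hidden_dim"], nhead=CONFIG["num_heads"], batch_first=True
    ),
    num_layers=CONFIG["transformer_layers"],
)

# Adam optimizer with the reported learning rate and weight decay.
optimizer = torch.optim.Adam(
    model.parameters(), lr=CONFIG["lr"], weight_decay=CONFIG["weight_decay"]
)
```

The DECL-specific values `k_ratio`, `alpha`, and `gamma` are kept in the config only for reference; how they enter the loss and denoising procedure is defined in the paper itself.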