An NCDE-based Framework for Universal Representation Learning of Time Series
Authors: Zihan Liu, Bowen Du, Junchen Ye, Xianqing Wen, Leilei Sun
Venue: IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, extensive experiments demonstrate the superiority of CTRL in forecasting, classification, and imputation tasks, particularly its outstanding robustness to missing data. |
| Researcher Affiliation | Collaboration | (1) SKLSDE Lab, Beihang University, Beijing, China; (2) Shanghai AI Laboratory, Shanghai, China; (3) School of Transportation Science and Engineering, Beihang University, Beijing, China |
| Pseudocode | No | The paper does not contain any pseudocode or clearly labeled algorithm blocks. The methods are described in textual form and through diagrams. |
| Open Source Code | Yes | The source code is publicly available at https://github.com/LiuZH-19/CTRL. |
| Open Datasets | Yes | For time series forecasting task, we utilize four popular real-world datasets: Exchange Rate [Lai et al., 2018], Wind [Wu et al., 2020], Weather, and ILI [Wu et al., 2021]. For time series classification task, we select 18 datasets from the UCR/UEA Time Series Classification Archives [Dau et al., 2019; Bagnall et al., 2018]. |
| Dataset Splits | No | The paper draws on well-known benchmarks that ship with standard splits (e.g., the UCR and UEA archives), but it does not explicitly state the train/validation/test split percentages or sample counts anywhere in its text. For imputation, it states 'To compare the model capacity under different proportions of missing data, we randomly mask the time points in the ratio of {12.5%, 25%, 37.5%, 50%}', but this describes the evaluation masking protocol rather than a dataset split (see the masking sketch after the table). |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware used for running experiments, such as GPU models, CPU types, or memory specifications. While Table 4 mentions 'Param.' (parameters), this refers to model size, not hardware. |
| Software Dependencies | No | The paper does not list specific software dependencies with their version numbers (e.g., Python 3.x, PyTorch 1.x) that would be needed for reproduction. |
| Experiment Setup | Yes | The representation dimension C is set to 320. The batch size B is set to 128 by default... The learning rate is 0.001... we set the mask ratio r_m to 0.5 and the average length of continuous masking l_m to 5. The parameter α for the reconstruction loss is set to 0.8. The loss term trade-off parameter λ is tuned from the set {0.01, 0.05, 0.1, 0.5, 1}... The integrand f_θ1 is a feedforward network with 5 fully connected layers and 64 hidden channels... ζ_θ2 is implemented as a fully connected network with 2 layers, where the hidden channel size is set to 128. |
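
The masking settings above (pretraining mask ratio r_m = 0.5, average continuous-mask length l_m = 5) and the imputation ratios {12.5%, 25%, 37.5%, 50%} are stated without a sampling procedure. Below is a minimal NumPy sketch of one common way to realize continuous masking, using geometrically distributed run lengths; the function name and the geometric-sampling choice are assumptions, not the paper's published code. Setting l_m = 1 with a small r_m degenerates to approximately pointwise random masking, as used in the imputation experiments.

```python
import numpy as np

def continuous_mask(seq_len: int, r_m: float = 0.5, l_m: float = 5.0,
                    rng: np.random.Generator | None = None) -> np.ndarray:
    """Boolean mask over time points (True = masked).

    Alternates masked and unmasked runs with geometrically distributed
    lengths, so masked runs average l_m points and the overall masked
    fraction approaches r_m in expectation.
    """
    rng = rng or np.random.default_rng()
    l_u = l_m * (1.0 - r_m) / r_m      # mean unmasked-run length implied by r_m
    mask = np.zeros(seq_len, dtype=bool)
    masked = bool(rng.random() < r_m)  # random initial state
    i = 0
    while i < seq_len:
        run = rng.geometric(1.0 / (l_m if masked else l_u))  # run length >= 1
        mask[i:i + run] = masked
        i += run
        masked = not masked
    return mask

# Paper's pretraining setting: ~50% of points masked in runs averaging 5.
m = continuous_mask(200, r_m=0.5, l_m=5.0)
```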
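
The paper also pins down the shapes of two subnetworks: the NCDE integrand f_θ1 (5 fully connected layers, 64 hidden channels) and ζ_θ2 (2 fully connected layers, 128 hidden channels). The PyTorch sketch below renders those shapes under the usual NCDE convention that the integrand maps the hidden state z(t) to a (hidden_dim × input_dim) matrix contracted with dX/dt; the activation choices, the final tanh, and ζ_θ2's input/output dimensions are assumptions the paper does not specify.

```python
import torch
import torch.nn as nn

class CDEFunc(nn.Module):
    """Integrand f_θ1: 5 fully connected layers, 64 hidden channels."""

    def __init__(self, hidden_dim: int, input_dim: int, width: int = 64):
        super().__init__()
        dims = [hidden_dim] + [width] * 4  # 4 Linear+ReLU blocks ...
        layers = []
        for d_in, d_out in zip(dims[:-1], dims[1:]):
            layers += [nn.Linear(d_in, d_out), nn.ReLU()]
        layers.append(nn.Linear(width, hidden_dim * input_dim))  # ... + output = 5 FC layers
        self.net = nn.Sequential(*layers)
        self.hidden_dim, self.input_dim = hidden_dim, input_dim

    def forward(self, z: torch.Tensor) -> torch.Tensor:
        out = self.net(z)
        # tanh keeps the vector field bounded, a common NCDE stabilizer.
        return out.tanh().view(*z.shape[:-1], self.hidden_dim, self.input_dim)

def make_zeta(in_dim: int, out_dim: int) -> nn.Module:
    """ζ_θ2: a 2-layer fully connected network with 128 hidden channels."""
    return nn.Sequential(nn.Linear(in_dim, 128), nn.ReLU(),
                         nn.Linear(128, out_dim))
```

`in_dim` and `out_dim` for `make_zeta` are left symbolic because the paper does not state where ζ_θ2 sits dimensionally in the model.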