CondTSF: One-line Plugin of Dataset Condensation for Time Series Forecasting
Authors: Jianrong Ding, Zhanyu Liu, Guanjie Zheng, Haiming Jin, Linghe Kong
NeurIPS 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct extensive experiments on eight commonly used time series datasets. CondTSF consistently improves the performance of all previous dataset condensation methods across all datasets, particularly at low condensing ratios. |
| Researcher Affiliation | Academia | Jianrong Ding (1,2), Zhanyu Liu (1), Guanjie Zheng (1), Haiming Jin (1), Linghe Kong (1). (1) School of Electronic Information and Electrical Engineering, Shanghai Jiao Tong University; (2) Zhiyuan College, Shanghai Jiao Tong University. {rafaelding,zhyliu00,gjzheng,jinhaiming,linghe.kong}@sjtu.edu.cn |
| Pseudocode | Yes | Algorithm 1: Dataset Condensation with CondTSF (MTT [3] as backbone) |
| Open Source Code | Yes | We attach the code needed to reproduce the results with the paper. |
| Open Datasets | Yes | We conduct extensive experiments on eight commonly used time series datasets. For all datasets, the model uses 24 steps of data to forecast 24 steps of data. We set the length of the synthetic dataset to 48, as shown in Table 2. Each synthetic dataset can only generate one training pair. |
| Dataset Splits | No | The source dataset is first divided into a train set and a test set. |
| Hardware Specification | Yes | All the experiments are carried out on an NVIDIA RTX 3080Ti. |
| Software Dependencies | No | The paper mentions using DLinear, MLP, LSTM, and CNN models, and refers to existing dataset condensation models, but it does not specify software dependencies with version numbers (e.g., Python version, specific library versions like PyTorch or TensorFlow). |
| Experiment Setup | Yes | For all datasets, the model uses 24 steps of data to forecast 24 steps of data. We set the length of the synthetic dataset to 48, as shown in Table 2. Each synthetic dataset can only generate one training pair... CondTSF is set to update every 3 epochs, and the additive update ratio β is set to 0.01. |
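The setup rows above contain a small arithmetic claim worth making explicit: with a 24-step lookback, a 24-step horizon, and a synthetic series of length 48, sliding-window extraction yields exactly one (input, target) pair. The sketch below checks that count and collects the reported hyperparameters; all identifiers are illustrative and not taken from the paper's released code.

```python
# Minimal sketch of the reported experiment setup. Function and key
# names are assumptions for illustration, not from the official repo.

def num_training_pairs(series_len: int, lookback: int, horizon: int) -> int:
    """Count of sliding-window (input, target) pairs in one series."""
    return max(series_len - lookback - horizon + 1, 0)

# Hyperparameters as reported in the table above.
config = {
    "lookback": 24,        # input steps
    "horizon": 24,         # forecast steps
    "synthetic_len": 48,   # length of each synthetic series
    "update_interval": 3,  # epochs between CondTSF updates
    "beta": 0.01,          # additive update ratio
}

# A length-48 synthetic series gives exactly one 24->24 training pair:
pairs = num_training_pairs(
    config["synthetic_len"], config["lookback"], config["horizon"]
)
print(pairs)  # -> 1
```

This is consistent with the quoted statement that "each synthetic dataset can only generate one training pair."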