SDformer: Similarity-driven Discrete Transformer For Time Series Generation

Authors: Zhicheng Chen, Shibo Feng, Zhong Zhang, Xi Xiao, Xingyu Gao, Peilin Zhao

NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Comprehensive experiments show that our method significantly outperforms competing approaches in terms of the generated time series quality while also ensuring a short inference time.
Researcher Affiliation | Collaboration | Zhicheng Chen1,2, Shibo Feng3, Zhong Zhang2, Xi Xiao1,4, Xingyu Gao5, Peilin Zhao2 (1Shenzhen International Graduate School, Tsinghua University; 2Tencent AI Lab; 3School of Computer Science and Engineering, Nanyang Technological University; 4Key Laboratory of Data Protection and Intelligent Management (Sichuan University), Ministry of Education; 5Institute of Microelectronics, Chinese Academy of Sciences)
Pseudocode | Yes | The detailed training and inference algorithms of SDformer-ar are shown in Algorithms 2 and 4 in Appendix E, respectively. (A hedged sketch of a generic autoregressive discrete-token training step appears after this table.)
Open Source Code | No | We will release all the experiment-related code upon acceptance.
Open Datasets | Yes | To evaluate the performance of SDformer, we conduct experiments on 4 real-world datasets (Stocks, ETTh, Energy and fMRI) and 2 simulated datasets (Sines and MuJoCo). Table 1 provides a partial description of each dataset. For more detailed information, please refer to Appendix A.
Dataset Splits | No | The paper mentions 'unconditional generation task, we conduct five evaluations' and 'conditional generation task, we run the process five times', and uses a 'train-synthesis-and-test-real (TSTR) method', which implies a training set; however, it does not explicitly specify a validation set or validation split percentages/counts. (A minimal TSTR sketch appears after this table.)
Hardware Specification | Yes | Our primary experiments are executed on an NVIDIA V100 GPU with the AdamW [23] optimizer.
Software Dependencies | No | The paper mentions using the 'AdamW [23] optimizer' but does not specify version numbers for Python, PyTorch, CUDA, or any other libraries or frameworks. (A sketch for logging these versions appears after this table.)
Experiment Setup | Yes | Furthermore, we summarize the detailed hyperparameters of SDformer, shown in Table 5. The two values in {*,*} are the hyperparameters of SDformer-ar and SDformer-m, respectively. (A toy expansion of this pairing convention appears below.)
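As referenced in the Pseudocode row: the paper's Algorithms 2 and 4 are not reproduced here, so the following PyTorch sketch only illustrates what a generic autoregressive training step and greedy sampling loop over discrete time-series tokens can look like. All names, sizes, and the plain TransformerEncoder stand-in are assumptions, not the authors' SDformer-ar implementation.

```python
# Generic next-token training step over discrete time-series tokens.
# Hypothetical stand-in, NOT the paper's Algorithm 2: vocab size,
# model dimensions, and the encoder-with-causal-mask trick are assumptions.
import torch
import torch.nn as nn

VOCAB_SIZE, SEQ_LEN, D_MODEL = 512, 96, 128  # assumed codebook/sequence sizes

embed = nn.Embedding(VOCAB_SIZE, D_MODEL)
model = nn.TransformerEncoder(  # causal mask below makes it decoder-style
    nn.TransformerEncoderLayer(d_model=D_MODEL, nhead=4, batch_first=True),
    num_layers=2,
)
head = nn.Linear(D_MODEL, VOCAB_SIZE)
opt = torch.optim.AdamW(
    [*embed.parameters(), *model.parameters(), *head.parameters()], lr=1e-4
)

tokens = torch.randint(0, VOCAB_SIZE, (8, SEQ_LEN))  # dummy token batch
inp, tgt = tokens[:, :-1], tokens[:, 1:]             # shift for next-token targets
mask = nn.Transformer.generate_square_subsequent_mask(inp.size(1))

logits = head(model(embed(inp), mask=mask))          # (B, L-1, VOCAB_SIZE)
loss = nn.functional.cross_entropy(logits.reshape(-1, VOCAB_SIZE), tgt.reshape(-1))
opt.zero_grad()
loss.backward()
opt.step()

# Greedy autoregressive sampling sketch (an Algorithm 4 analogue, also assumed).
seq = torch.randint(0, VOCAB_SIZE, (1, 1))           # arbitrary start token
with torch.no_grad():
    for _ in range(SEQ_LEN - 1):
        m = nn.Transformer.generate_square_subsequent_mask(seq.size(1))
        nxt = head(model(embed(seq), mask=m))[:, -1].argmax(-1, keepdim=True)
        seq = torch.cat([seq, nxt], dim=1)
```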
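As referenced in the Dataset Splits row: TSTR (train on synthetic, test on real) fits a downstream predictor on generated series and scores it on held-out real series. Below is a minimal, self-contained sketch with scikit-learn; the random arrays are placeholders for generated and real data, and the one-step-ahead regression task is an assumption, not necessarily the paper's exact TSTR setup.

```python
# Minimal train-on-synthetic, test-on-real (TSTR) evaluation sketch.
# The random arrays stand in for generated/real series; names are hypothetical.
import numpy as np
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_absolute_error

def to_xy(series, lag=24):
    """Turn (N, T) series into lagged feature windows X and next-step targets y."""
    X, y = [], []
    for s in series:
        for t in range(lag, len(s)):
            X.append(s[t - lag:t])
            y.append(s[t])
    return np.array(X), np.array(y)

rng = np.random.default_rng(0)
synthetic = rng.standard_normal((64, 96))  # placeholder for model-generated series
real_test = rng.standard_normal((16, 96))  # placeholder for held-out real series

X_tr, y_tr = to_xy(synthetic)              # train the downstream model on synthetic
X_te, y_te = to_xy(real_test)              # evaluate it on real data
predictor = Ridge().fit(X_tr, y_tr)
print("TSTR MAE:", mean_absolute_error(y_te, predictor.predict(X_te)))
```

Lower TSTR error on real data indicates the synthetic series preserved predictive structure.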
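As referenced in the Software Dependencies row: the version numbers the report flags as missing can be recorded with standard calls, alongside the AdamW construction the paper cites. The learning rate and weight decay values here are assumptions.

```python
# Log the environment details the report says are unspecified, then build
# the cited AdamW optimizer. lr/weight_decay values are placeholders.
import platform
import torch

print("Python :", platform.python_version())
print("PyTorch:", torch.__version__)
print("CUDA   :", torch.version.cuda)  # None on CPU-only builds
print("GPU    :", torch.cuda.get_device_name(0) if torch.cuda.is_available() else "n/a")

params = [torch.nn.Parameter(torch.zeros(3))]  # placeholder parameters
optimizer = torch.optim.AdamW(params, lr=1e-4, weight_decay=1e-2)
```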
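As referenced in the Experiment Setup row: the {*,*} notation in Table 5 pairs one value per model variant. A toy expansion of that convention follows; the keys and numbers are invented placeholders, not Table 5's actual settings.

```python
# Hypothetical expansion of {SDformer-ar, SDformer-m} hyperparameter pairs.
# Keys and values are placeholders, not the paper's Table 5 entries.
paired = {"learning_rate": (1e-4, 2e-4), "num_layers": (4, 6)}
ar_cfg = {k: v[0] for k, v in paired.items()}  # first value: SDformer-ar
m_cfg  = {k: v[1] for k, v in paired.items()}  # second value: SDformer-m
print(ar_cfg, m_cfg)
```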