Diffusion-TS: Interpretable Diffusion for General Time Series Generation

Authors: Xinyu Yuan, Yan Qiao

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this section, we first study the interpretable outputs of the proposed model. Then we evaluate our method in two modes: unconditional and conditional generation, to verify the quality of the generated signals. [...] Finally, we conduct experiments to validate the performance of Diffusion-TS under insufficient and irregular settings. Implementation details and ablation study can be found in Appendix G and C.7, respectively.
Researcher Affiliation | Academia | Xinyu Yuan, Yan Qiao, Hefei University of Technology, yxy5315@gmail.com, qiaoyan@hfut.edu.cn
Pseudocode | Yes | Algorithm 1 (Reconstruction-guided Sampling) and Algorithm 2 (Optimized Conditional Sampling); a hedged sketch of the reconstruction-guided update appears after this table.
Open Source Code | Yes | The code is available at https://github.com/Y-debug-sys/Diffusion-TS.
Open Datasets | Yes | We use 4 real-world datasets and 2 simulated datasets in Table 11 to evaluate our method. Stocks is the Google stock price data from 2004 to 2019. [...] ETTh dataset contains the data collected from electricity transformers, [...] Energy is a UCI appliance energy prediction dataset [...]. fMRI is a benchmark for causal discovery [...]. Sines has 5 features [...]. MuJoCo is multivariate physics simulation time series data [...]. Table 11 (Dataset Details): Sines, 10000 samples, dim 5, https://github.com/jsyoon0823/TimeGAN; Stocks, 3773 samples, dim 6, https://finance.yahoo.com/quote/GOOG; ETTh(1), 17420 samples, dim 7, https://github.com/zhouhaoyi/ETDataset; MuJoCo, 10000 samples, dim 14, https://github.com/deepmind/dm_control; Energy, 19711 samples, dim 28, https://archive.ics.uci.edu/ml/datasets; fMRI, 10000 samples, dim 50, https://www.fmrib.ox.ac.uk/datasets
Dataset Splits | No | We use 90% of the dataset for training and the rest for testing. (A hedged splitting sketch appears after this table.)
Hardware Specification | Yes | A single Nvidia 3090 GPU is used for model training.
Software Dependencies | No | The paper mentions software components such as GRU-based neural networks and implicitly relies on PyTorch (inferred rather than stated explicitly in the paper), but it does not provide version numbers for these or any other software dependencies needed for reproduction.
Experiment Setup | Yes | We did limited hyperparameter tuning in this study to find default hyperparameters that perform well across datasets. The range considered for each hyperparameter is: batch size: [32, 64, 128], the number of attention heads: [4, 8], the basic dimension: [32, 64, 96, 128], the diffusion steps: [50, 200, 500, 1000], and the guidance strength: [1., 1e-1, 5e-2, 1e-2, 1e-3]. [...] In all of our experiments, we use cosine noise scheduling and optimize our network using Adam with (β1, β2) = (0.9, 0.96). A linearly decaying learning rate starts at 0.0008 after 500 warmup iterations. For conditional generation, we set the inference steps and γ to 200 and 0.05, respectively. Table 8: Hyperparameters, training details, and compute resources used for each model. (An optimizer and noise-schedule sketch appears after this table.)
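To make the pseudocode row concrete, the following is a minimal sketch of what a single reconstruction-guided sampling step could look like, assuming a DDPM/DDIM-style sampler in which the network predicts the clean series x̂0 and the observed entries steer the update through a gradient term weighted by the guidance strength γ. All names (model, observed_mask, gamma, alpha_bar_t, alpha_bar_prev) are illustrative assumptions, not the authors' code; the authoritative versions are Algorithms 1 and 2 in the paper and the linked repository.

```python
import torch

@torch.no_grad()
def reconstruction_guided_step(model, x_t, t, y_obs, observed_mask,
                               gamma, alpha_bar_t, alpha_bar_prev):
    """One illustrative denoising step with reconstruction guidance.

    alpha_bar_t / alpha_bar_prev are the cumulative alpha products at the
    current and previous diffusion steps (scalar tensors). This is a sketch,
    not the paper's exact update rule.
    """
    # Predict the clean series x0_hat from the current noisy sample x_t.
    x0_hat = model(x_t, t)

    # Closed-form gradient of -0.5 * ||observed_mask * (y_obs - x0_hat)||^2
    # with respect to x0_hat, scaled by the guidance strength gamma, so the
    # reconstruction is pulled toward the observed values only.
    x0_guided = x0_hat + gamma * observed_mask * (y_obs - x0_hat)

    # Recover the implied noise, then take a DDIM-style deterministic update.
    eps = (x_t - alpha_bar_t.sqrt() * x0_guided) / (1.0 - alpha_bar_t).sqrt()
    x_prev = alpha_bar_prev.sqrt() * x0_guided + (1.0 - alpha_bar_prev).sqrt() * eps
    return x_prev
```

The key idea is that only the masked (observed) entries contribute to the correction, so the unobserved portion of the series is generated freely; handling of the final step (t = 0) and any added noise is omitted here for brevity.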
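The dataset-splits row quotes only a 90%/10% train/test split. A minimal sketch of such a split over windowed samples is shown below; the window length, random seed, and whether the split is shuffled or chronological are assumptions, since the quoted text does not specify them.

```python
import numpy as np

def train_test_split_windows(windows: np.ndarray, train_ratio: float = 0.9, seed: int = 0):
    """Shuffle windowed time-series samples and split 90%/10% (illustrative)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(windows))
    cut = int(train_ratio * len(windows))
    return windows[idx[:cut]], windows[idx[cut:]]

# Example: 10000 windows of length 24 with 5 features (a Sines-like shape).
data = np.random.rand(10000, 24, 5)
train, test = train_test_split_windows(data)
print(train.shape, test.shape)  # (9000, 24, 5) (1000, 24, 5)
```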
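The experiment-setup row lists Adam with (β1, β2) = (0.9, 0.96), a learning rate of 0.0008 with linear decay after 500 warmup iterations, and cosine noise scheduling. A hedged PyTorch sketch of that configuration follows; the total iteration count, the decay endpoint, the exact cosine-schedule variant, and the placeholder module `net` are assumptions not specified in the quoted text.

```python
import math
import torch

net = torch.nn.Linear(8, 8)  # placeholder for the actual Diffusion-TS network

# Adam with the betas quoted in the setup row, base LR 0.0008.
optimizer = torch.optim.Adam(net.parameters(), lr=8e-4, betas=(0.9, 0.96))

def lr_lambda(step, warmup=500, total=40000):
    """Linear warmup to the base LR, then linear decay (assumed endpoint)."""
    if step < warmup:
        return step / warmup
    return max(0.0, (total - step) / (total - warmup))

# Call scheduler.step() once per training iteration.
scheduler = torch.optim.lr_scheduler.LambdaLR(optimizer, lr_lambda)

def cosine_beta_schedule(timesteps, s=0.008):
    """Cosine noise schedule in the style of Nichol & Dhariwal (2021);
    the exact variant used by the paper is not stated in the quoted text."""
    steps = torch.arange(timesteps + 1, dtype=torch.float64)
    alphas_cumprod = torch.cos(((steps / timesteps) + s) / (1 + s) * math.pi / 2) ** 2
    alphas_cumprod = alphas_cumprod / alphas_cumprod[0]
    betas = 1 - (alphas_cumprod[1:] / alphas_cumprod[:-1])
    return betas.clamp(max=0.999)

betas = cosine_beta_schedule(500)  # e.g., 500 diffusion steps from the tuned range
```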