MG-TSD: Multi-Granularity Time Series Diffusion Models with Guided Learning Process

Authors: Xinyao Fan, Yueying Wu, Chang Xu, Yuhao Huang, Weiqing Liu, Jiang Bian

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Extensive experiments conducted on real-world datasets demonstrate that our MG-TSD model outperforms existing time series prediction methods." "In this section, we conduct extensive experiments on six real-world datasets to evaluate the performance of the proposed MG-TSD model and compare it with previous state-of-the-art baselines."
Researcher Affiliation | Collaboration | University of British Columbia, Peking University, Nanjing University, Microsoft Research
Pseudocode | Yes | "Algorithm 1 Training procedure"
Open Source Code | Yes | "Our code is available at https://github.com/Hundredl/MG-TSD."
Open Datasets | Yes | (i) Solar: https://www.nrel.gov/grid/solar-power-data.html (ii) Electricity: https://archive.ics.uci.edu/dataset/321/electricityloaddiagrams20112014 (iii) Traffic: https://archive.ics.uci.edu/dataset/204/pems+sf (iv) Taxi: https://www.nyc.gov/site/tlc/about/tlc-trip-record-data.page (v) KDD-cup: https://www.kdd.org/kdd2018/kdd-cup (vi) Wikipedia: https://github.com/mbohlkeschneider/gluon-ts/tree/mv_release/datasets
Dataset Splits | No | The paper does not explicitly provide training/validation/test splits (e.g., percentages, sample counts, or a clear partitioning methodology) needed for reproducibility.
Hardware Specification | Yes | "All models are trained and tested on a single NVIDIA A100 80GB GPU." "These experiments were executed using a single A6000 card with 48GB memory capacity."
Software Dependencies | No | "The MG-TSD code in this study is implemented using PyTorch (Paszke et al., 2019). It utilizes the PyTorchTS library (Rasul, 2021), which enables convenient integration of PyTorch models with the GluonTS library (Alexandrov et al., 2020b), on which we heavily rely for data preprocessing, model training, and evaluation in our experiments." The paper names its software dependencies but does not provide specific version numbers (e.g., "PyTorch 1.9").
Experiment Setup | Yes | "We train our model for 30 epochs using the Adam optimizer with a fixed learning rate of 10⁻⁵. We set the mini-batch size to 128 for solar and 32 for other datasets. The diffusion step is configured as 100." Additional hyperparameters, such as share ratios, granularity levels, and loss weights, are detailed in Appendix C.3, and Table 7 lists the tested hyperparameter values for the MG-TSD model.
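The reported training setup can be collected into a small configuration sketch for anyone attempting a reproduction. This is not the authors' actual config file; the key names and the `batch_size_for` helper are illustrative, and only the values (30 epochs, Adam, fixed learning rate 10⁻⁵, batch size 128 for solar / 32 otherwise, 100 diffusion steps) come from the paper.

```python
# Hedged sketch of the reported MG-TSD training configuration.
# Key names are our own; values are taken from the "Experiment Setup" row above.
config = {
    "epochs": 30,
    "optimizer": "Adam",
    "learning_rate": 1e-5,   # fixed, no schedule reported
    "batch_size": {"solar": 128, "default": 32},
    "diffusion_steps": 100,
    # Share ratios, granularity levels, and loss weights are dataset-specific;
    # see Appendix C.3 and Table 7 of the paper.
}

def batch_size_for(dataset: str) -> int:
    """Mini-batch size: 128 for the solar dataset, 32 for all others."""
    return config["batch_size"].get(dataset, config["batch_size"]["default"])
```

A reproduction would still need the dataset-specific hyperparameters from Appendix C.3, which this sketch deliberately leaves as a pointer rather than guessed values.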