ANT: Adaptive Noise Schedule for Time Series Diffusion Models

Authors: Seunghan Lee, Kibok Lee, Taeyoung Park

NeurIPS 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate the effectiveness of our method across various tasks, including TS forecasting, refinement, and generation, on datasets from diverse domains. Code is available at this repository: https://github.com/seunghan96/ANT.
Researcher Affiliation | Academia | Seunghan Lee, Kibok Lee, Taeyoung Park; Department of Statistics and Data Science, Yonsei University; {seunghan9613,kibok,tpark}@yonsei.ac.kr
Pseudocode | Yes | Algorithm 1: Calculation of ANT score. (An illustrative schedule-scoring sketch appears after the table.)
Open Source Code | Yes | Code is available at this repository: https://github.com/seunghan96/ANT.
Open Datasets | Yes | In our experiments, we employ eight widely-used univariate datasets from various fields, which can be found in Gluon TS [2] in their preprocessed form, with training and test splits provided. Table A.1 shows the statistics of the datasets, annotated with their corresponding frequencies (daily or hourly) and lengths of predictions. (A dataset-loading sketch appears after the table.)
Dataset Splits | Yes | Following TSDiff [15], the validation set is created by splitting a portion of the training dataset, with the split ratio determined by the sizes of the training and test datasets. (A validation-split sketch appears after the table.)
Hardware Specification | No | The paper discusses computational time and efficiency but does not specify the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions using Gluon TS for data and metrics, but does not specify version numbers for any software dependencies, such as the programming language (e.g., Python), libraries (e.g., PyTorch, TensorFlow), or the Gluon TS version itself.
Experiment Setup | Yes | Table B.1 presents the hyperparameters of the backbone model utilized in our experiment, which are aligned with those used in TSDiff. Note that the diffusion step embeddings [35] are only applied to the model employing a non-linear schedule. Moreover, we employ skip connections for certain datasets, following the TSDiff approach, which results in improvements in validation performance. (A step-embedding sketch appears after the table.)
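The ANT score itself (Algorithm 1) is not reproduced in this report. Purely as an illustration of what a noise-schedule score built around linear degradation can look like, the hypothetical `linearity_score` below rewards variance schedules whose cumulative signal-retention curve stays close to a straight line from 1 to 0; the choice of statistic and the scoring formula are assumptions made for illustration, not the paper's definition.

```python
import numpy as np

def linearity_score(betas: np.ndarray) -> float:
    """Hypothetical schedule score: negative mean deviation of the cumulative
    signal-retention curve (alpha_bar) from an ideal linear decay.
    Illustration only; this is NOT ANT's Algorithm 1."""
    alphas = 1.0 - betas                    # per-step signal retention
    alpha_bar = np.cumprod(alphas)          # cumulative retention at each step
    ideal = np.linspace(1.0, 0.0, len(betas))  # linear reference: full signal -> pure noise
    # Smaller area between the two curves = degradation closer to linear.
    return -np.abs(alpha_bar - ideal).mean()

# Example: compare a linear and a quadratic beta schedule over 100 steps.
linear = np.linspace(1e-4, 0.1, 100)
quadratic = np.linspace(1e-2, np.sqrt(0.1), 100) ** 2
print(linearity_score(linear), linearity_score(quadratic))
```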
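For the Open Datasets row, a minimal sketch of loading one of GluonTS's preprocessed datasets with its provided train/test split; the dataset name `m4_hourly` is only an example and the import path may vary slightly across GluonTS versions.

```python
from gluonts.dataset.repository.datasets import get_dataset

# Download (and cache) a preprocessed dataset with train/test splits provided.
dataset = get_dataset("m4_hourly")

prediction_length = dataset.metadata.prediction_length  # forecast horizon
freq = dataset.metadata.freq                            # e.g. "H" for hourly data

print(f"freq={freq}, prediction_length={prediction_length}")
print(f"train series: {len(dataset.train)}, test series: {len(dataset.test)}")
```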
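For the Dataset Splits row, the exact split ratio used by TSDiff is not stated here; the sketch below only illustrates the general pattern of holding out trailing windows of each training series for validation, with `num_val_windows` as an assumed knob.

```python
import numpy as np

def split_train_val(series_list, prediction_length, num_val_windows=1):
    """Hold out the last num_val_windows * prediction_length points of each
    training series for validation. Illustrative only; TSDiff's actual split
    ratio depends on the sizes of the training and test datasets."""
    holdout = num_val_windows * prediction_length
    train, val = [], []
    for ts in series_list:
        ts = np.asarray(ts)
        train.append(ts[:-holdout])   # everything except the held-out tail
        val.append(ts[-holdout:])     # trailing windows used for validation
    return train, val

train, val = split_train_val([np.arange(200.0)], prediction_length=24, num_val_windows=2)
print(len(train[0]), len(val[0]))  # 152 48
```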
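For the Experiment Setup row, diffusion step embeddings are commonly implemented as sinusoidal embeddings of the step index; the sketch below shows that standard construction under an assumed embedding dimension, not the exact configuration of the ANT/TSDiff backbone.

```python
import torch

def diffusion_step_embedding(t: torch.Tensor, dim: int = 128) -> torch.Tensor:
    """Standard sinusoidal embedding of diffusion step indices t (shape [B]).
    Mirrors the common DDPM-style construction; the actual dimension and
    scaling in the ANT/TSDiff backbone may differ."""
    half = dim // 2
    freqs = torch.exp(
        -torch.arange(half, dtype=torch.float32)
        * (torch.log(torch.tensor(10000.0)) / (half - 1))
    )
    args = t.float()[:, None] * freqs[None, :]                     # [B, half]
    return torch.cat([torch.sin(args), torch.cos(args)], dim=-1)   # [B, dim]

emb = diffusion_step_embedding(torch.tensor([0, 10, 50]), dim=64)
print(emb.shape)  # torch.Size([3, 64])
```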