Deep Explicit Duration Switching Models for Time Series

Authors: Abdul Fatir Ansari, Konstantinos Benidis, Richard Kurle, Ali Caner Turkmen, Harold Soh, Alexander J. Smola, Bernie Wang, Tim Januschowski

NeurIPS 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirical results on multiple datasets demonstrate that RED-SDS achieves considerable improvement in time series segmentation and competitive forecasting performance against the state of the art.
Researcher Affiliation | Collaboration | Amazon Research; National University of Singapore
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any concrete statements or links regarding the release of its source code.
Open Datasets | Yes | "We used the publicly available dancing bees dataset [40]... We evaluated RED-SDS in the context of time series forecasting on 5 popular public datasets available in GluonTS [4]" (a dataset-loading sketch is given below the table).
Dataset Splits | No | The paper mentions 'training' and 'test' data but does not explicitly describe a 'validation' dataset split.
Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory specifications) used for running the experiments.
Software Dependencies | No | The paper mentions software like GluonTS [4] but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | "For all the datasets, we set the number of switches equal to the number of ground truth operating modes... We used a forecast window of 150 days and 168 hours for datasets with daily and hourly frequency, respectively... The probabilistic forecasts are conditioned on the training range and computed with 100 samples for each method." (an evaluation sketch using 100 samples is given below the table).
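
The "Open Datasets" row points to public datasets bundled with GluonTS. The following minimal sketch shows how such a dataset can be loaded; the dataset key "exchange_rate" is an illustrative assumption, since the row does not name the exact five datasets used in the paper.

    # Minimal sketch (not from the paper): loading one of the public GluonTS
    # datasets referenced above. The key "exchange_rate" is illustrative only.
    from gluonts.dataset.repository.datasets import get_dataset

    dataset = get_dataset("exchange_rate")  # downloads and caches the data locally

    # The training split iterates over dicts with a "start" timestamp and a
    # "target" array of observed values.
    first_series = next(iter(dataset.train))
    print(dataset.metadata.freq)           # sampling frequency, e.g. "1B"
    print(len(first_series["target"]))     # length of the first training series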
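
The "Experiment Setup" row quotes probabilistic forecasts that are conditioned on the training range and computed with 100 samples per method. The sketch below illustrates that evaluation protocol with GluonTS, assuming a circa-2021 0.x API; a SeasonalNaivePredictor is used as a stand-in for RED-SDS, whose implementation is not released with the paper.

    # Minimal sketch, not the authors' code: forecasts conditioned on the
    # training range with 100 samples per series, as quoted above.
    # SeasonalNaivePredictor is a stand-in for RED-SDS (no released code);
    # signatures assume a circa-2021 GluonTS 0.x API.
    from gluonts.dataset.repository.datasets import get_dataset
    from gluonts.evaluation import Evaluator, make_evaluation_predictions
    from gluonts.model.seasonal_naive import SeasonalNaivePredictor

    dataset = get_dataset("exchange_rate")
    predictor = SeasonalNaivePredictor(
        freq=dataset.metadata.freq,
        prediction_length=dataset.metadata.prediction_length,
    )

    # num_samples=100 mirrors the sample count quoted from the paper; for this
    # deterministic stand-in the samples are degenerate, whereas a model like
    # RED-SDS would produce distinct sample paths.
    forecast_it, ts_it = make_evaluation_predictions(
        dataset=dataset.test,
        predictor=predictor,
        num_samples=100,
    )

    agg_metrics, _ = Evaluator()(ts_it, forecast_it)
    print(agg_metrics["mean_wQuantileLoss"])  # CRPS-like aggregate metric

The stand-in predictor keeps the sketch dependency-light; substituting any probabilistic GluonTS predictor (or a RED-SDS implementation) leaves the evaluation call unchanged.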