Deep Switching Auto-Regressive Factorization: Application to Time Series Forecasting

Authors: Amirreza Farnoosh, Bahar Azari, Sarah Ostadabbas (pp. 7394-7403)

AAAI 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Our extensive experiments, which include simulated data and real data from a wide range of applications such as climate change, weather forecasting, traffic, infectious disease spread and nonlinear physical systems, attest the superior performance of DSARF in terms of long- and short-term prediction error, when compared with the state-of-the-art methods."
Researcher Affiliation | Academia | Department of Electrical and Computer Engineering, Northeastern University, Boston, Massachusetts, USA. {afarnoosh, azari, ostadabbas}@ece.neu.edu
Pseudocode | No | No structured pseudocode or algorithm blocks were found in the paper.
Open Source Code | Yes | The source code and experiments are available at https://github.com/ostadabbas/DSARF
Open Datasets | Yes | Table 1 provides dataset descriptions and links to publicly available datasets: Birmingham (https://data.birmingham.gov.uk/dataset/birmingham-parking), Guangzhou (https://doi.org/10.5281/zenodo.1205229), Hangzhou (https://tianchi.aliyun.com/competition/entrance/231708/), Seattle (https://github.com/zhiyongc/Seattle-Loop-Data), PST (http://iridl.ldeo.columbia.edu/), Flu (https://www.google.org/flutrends/about/), Dengue (https://www.google.org/flutrends/about/), Precipitation (https://www.ncdc.noaa.gov/). Citations are also provided for Bat (Bergou et al. 2015) and Apnea (Rigney 1994; Goldberger et al. 2000).
Dataset Splits | No | Table 1 provides train and test splits (T_test values, indicating that the last T_test time points of each series are held out for testing; a minimal sketch of this split appears after the table), but it does not explicitly specify a distinct validation split or percentage for all datasets.
Hardware Specification | Yes | We learned and tested all the models on an Intel Core i7 CPU@3.7GHz with 8 GB of RAM.
Software Dependencies | Yes | We implemented DSARF in PyTorch v1.3 (Paszke et al. 2017) and used the Adam optimizer (Kingma and Ba 2014)
Experiment Setup | Yes | "We implemented DSARF in PyTorch v1.3 (Paszke et al. 2017) and used the Adam optimizer (Kingma and Ba 2014) with learning rate of 0.01. We initialized all parameters randomly and adopted a linear KL annealing schedule (Bowman et al. 2016) to increase from 0.01 to 1 over the course of 100 epochs... 500 epochs sufficed for most experiments." (See the training-loop sketch after the table.)
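
For the Dataset Splits row above, a minimal sketch of the hold-out scheme described in Table 1 of the paper, where the last T_test time points of a series are reserved for testing. The function and variable names (train_test_split_last, series, t_test) are illustrative, not taken from the DSARF codebase.

    import numpy as np

    def train_test_split_last(series: np.ndarray, t_test: int):
        """Split a (T, D) time series so the last t_test time points
        form the test set, as described for Table 1 of the paper."""
        assert 0 < t_test < len(series), "t_test must be within the series length"
        train = series[:-t_test]   # first T - t_test points for training
        test = series[-t_test:]    # last t_test points for testing
        return train, test

    # Example: a toy series of 120 steps with the last 20 held out.
    series = np.random.randn(120, 5)
    train, test = train_test_split_last(series, t_test=20)
    print(train.shape, test.shape)  # (100, 5) (20, 5)

Note that this scheme leaves no separate validation window, which is what the "No" result above reflects.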
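For the Experiment Setup row, a minimal PyTorch sketch of the reported configuration: Adam with learning rate 0.01 and a linear KL annealing weight ramped from 0.01 to 1 over the first 100 epochs (Bowman et al. 2016), trained for 500 epochs. The encoder/decoder stand-in and loss terms are placeholders for illustration, not the authors' DSARF implementation.

    import torch

    def kl_weight(epoch: int, start: float = 0.01, end: float = 1.0,
                  warmup_epochs: int = 100) -> float:
        """Linear KL annealing: ramp the weight from `start` to `end`
        over the first `warmup_epochs` epochs, then hold at `end`."""
        if epoch >= warmup_epochs:
            return end
        return start + (end - start) * epoch / warmup_epochs

    # Toy stand-in model: linear encoder/decoder with a unit-variance
    # Gaussian posterior, so KL(N(mu, I) || N(0, I)) = 0.5 * ||mu||^2.
    torch.manual_seed(0)
    x = torch.randn(64, 10)                       # toy data batch
    encoder = torch.nn.Linear(10, 10)
    decoder = torch.nn.Linear(10, 10)
    params = list(encoder.parameters()) + list(decoder.parameters())
    optimizer = torch.optim.Adam(params, lr=0.01)  # lr from the paper

    for epoch in range(500):                       # "500 epochs sufficed ..."
        beta = kl_weight(epoch)                    # annealed KL weight
        z_mean = encoder(x)
        recon = decoder(z_mean)
        recon_loss = ((recon - x) ** 2).mean()     # reconstruction term
        kl_loss = 0.5 * (z_mean ** 2).mean()       # KL term (unit variance)
        loss = recon_loss + beta * kl_loss         # annealed ELBO-style loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

Annealing the KL weight this way is a standard remedy for posterior collapse in variational models: early epochs emphasize reconstruction, and the KL penalty reaches full strength only after the warm-up.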