Boosting multi-step autoregressive forecasts

Authors: Souhaib Ben Taieb, Rob Hyndman

ICML 2014 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "First, we investigate the performance of the proposed strategy in terms of bias and variance decomposition of the error using simulated time series. Then, we evaluate the proposed strategy on real-world time series from two forecasting competitions. Overall, we obtain excellent performance with respect to the standard forecasting strategies."
Researcher Affiliation | Academia | "Souhaib Ben Taieb SBENTAIE@ULB.AC.BE Machine Learning Group, Computer Science Department, Faculty of Sciences, Université Libre de Bruxelles, Brussels, Belgium. Rob J Hyndman ROB.HYNDMAN@MONASH.EDU Department of Econometrics and Business Statistics, Monash University, Clayton VIC 3800, Australia."
Pseudocode | Yes | "Algorithm 1 The boost strategy"
Open Source Code | No | The paper does not provide a link to, or any other concrete means of accessing, source code for the described methodology.
Open Datasets | Yes | "We now evaluate our boost strategy on time series from the M3 (Makridakis & Hibon, 2000) and the NN5 forecasting competitions. The M3 competition dataset consists of 3003 monthly, quarterly, and annual time series. The NN5 competition dataset comprises M = 111 daily time series with T = 735 observations and H = 56 days (8 weeks)." (NN5: Forecasting competition for artificial neural networks and computational intelligence, 2008, www.neural-forecasting-competition.com/NN5/.)
Dataset Splits | Yes | "Finally, we select the different model hyperparameters using a time-series cross-validation procedure (also called rolling origin)."
Hardware Specification | No | The paper does not provide specific hardware details (GPU/CPU models, memory, etc.) used to run its experiments.
Software Dependencies | No | The paper mentions software components and methods (e.g., P-splines, STL, a neural network model) but does not provide specific version numbers for them.
Experiment Setup | Yes | "P-splines require the selection of two additional parameters: the number of knots and the smoothing parameter. However, Ruppert (2002) has shown that the number of knots does not have much effect on the estimation provided enough knots are used. The weakness of the P-spline is measured by its degrees of freedom (df). Bühlmann and Yu (2003) and Schmid and Hothorn (2008) proposed that the smoothing parameter be set to give a small value of df (i.e., df ∈ [3, 4]), and that this number be kept fixed in each boosting iteration."
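The rolling-origin (time-series cross-validation) procedure quoted under "Dataset Splits" can be sketched as follows. This is a generic illustration under my own assumptions, not the authors' code: the function name, `min_train`, `horizon`, and `step` parameters are all hypothetical, and the paper does not state its exact window settings.

```python
import numpy as np

def rolling_origin_splits(n_obs, min_train, horizon, step=1):
    """Yield (train_idx, test_idx) pairs for rolling-origin evaluation.

    The forecast origin advances through the series; each split trains on
    all observations up to the origin and tests on the next `horizon`
    points, so the model is never evaluated on data before the origin.
    """
    origin = min_train
    while origin + horizon <= n_obs:
        train_idx = np.arange(origin)                    # all data up to origin
        test_idx = np.arange(origin, origin + horizon)   # next H observations
        yield train_idx, test_idx
        origin += step

# Example: 20 observations, initial training window of 10, 3-step-ahead
# test blocks, advancing the origin by 3 each time -> 3 splits.
for train_idx, test_idx in rolling_origin_splits(20, min_train=10,
                                                 horizon=3, step=3):
    print(f"train up to t={train_idx[-1]}, test t={test_idx[0]}..{test_idx[-1]}")
```

Hyperparameters would then be chosen to minimize forecast error averaged over the test blocks, which respects the temporal ordering that an ordinary shuffled K-fold split would destroy.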
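The fixed-df weak learner quoted under "Experiment Setup" can be illustrated with a minimal L2-boosting sketch. To stay self-contained it substitutes a ridge-penalized truncated-line spline for a true P-spline (B-spline basis with difference penalty), and picks the penalty by bisection so that the smoother's trace, i.e. its degrees of freedom, lands near the recommended df ∈ [3, 4]. All function names and the df target of 3.5 are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def penalized_smoother_matrix(x, n_knots=20, df_target=3.5):
    """Hat matrix S of a penalized spline smoother with trace(S) ~= df_target.

    Stand-in for the P-spline weak learner: a truncated-line basis with a
    ridge penalty on the knot coefficients (intercept and slope are left
    unpenalized), with lambda chosen by bisection on trace(S), which
    decreases monotonically in lambda.
    """
    knots = np.quantile(x, np.linspace(0.05, 0.95, n_knots))
    B = np.column_stack([np.ones_like(x), x] +
                        [np.maximum(x - k, 0.0) for k in knots])
    D = np.eye(B.shape[1])
    D[0, 0] = D[1, 1] = 0.0  # do not penalize intercept and slope

    def hat(lam):
        return B @ np.linalg.solve(B.T @ B + lam * D, B.T)

    lo, hi = 1e-8, 1e8
    for _ in range(100):                 # bisection in log-space on lambda
        mid = np.sqrt(lo * hi)
        if np.trace(hat(mid)) > df_target:
            lo = mid                     # too flexible: increase the penalty
        else:
            hi = mid
    return hat(np.sqrt(lo * hi))

def l2_boost(y, S, n_iter=50, nu=0.1):
    """L2-boosting: repeatedly fit the fixed weak smoother S to residuals."""
    fit = np.zeros_like(y)
    for _ in range(n_iter):
        fit += nu * (S @ (y - fit))      # shrunken stagewise update
    return fit
```

Keeping the weak learner's df fixed and small across iterations, as the quoted passage recommends, makes each boosting step only mildly flexible; the overall model complexity is then controlled by the number of iterations and the shrinkage factor `nu` rather than by the smoothing parameter.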