Minimax Time Series Prediction

Authors: Wouter M. Koolen, Alan Malek, Peter L. Bartlett, Yasin Abbasi-Yadkori

NeurIPS 2015

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "In this paper, we study the time series prediction problem in the regret framework; instead of making assumptions on the data generating process, we ask: can we predict the data sequence online almost as well as the best offline prediction method in some comparison class (in this case, offline means that the comparator only needs to model the data sequence after seeing all of it)? Our main contribution is computing the exact minimax strategy for a range of time series prediction problems. As a concrete motivating example, let us pose the simplest nontrivial such minimax problem."
Researcher Affiliation | Academia | Wouter M. Koolen (Centrum Wiskunde & Informatica, wmkoolen@cwi.nl); Alan Malek (UC Berkeley, malek@berkeley.edu); Peter L. Bartlett (UC Berkeley & QUT, bartlett@cs.berkeley.edu); Yasin Abbasi-Yadkori (Queensland University of Technology, yasin.abbasiyadkori@qut.edu.au)
Pseudocode | No | The paper describes algorithms and recurrences (e.g., in Sections 4 and 5) but does not include formal pseudocode blocks or algorithms labeled as such.
Open Source Code | No | The paper does not mention releasing code or provide a link to a code repository.
Open Datasets | No | The paper is theoretical and does not use, mention, or provide access information for any dataset.
Dataset Splits | No | The paper is theoretical and does not describe experimental validation or dataset splits.
Hardware Specification | No | The paper is theoretical and does not mention any hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not list software dependencies or version numbers for an implementation or experiments.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup, hyperparameters, or training settings.
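The regret framework quoted in the Research Type row can be made concrete with a small sketch. The following is illustrative only and is not the paper's minimax strategy: the online learner here simply plays the running mean of past outcomes under squared loss, while the offline comparator picks the single best fixed prediction after seeing the whole sequence (its empirical mean). The function name and the choice of 0 as the initial prediction are our own assumptions for the sketch.

```python
import numpy as np

def regret_vs_best_fixed(xs):
    """Regret of a running-mean online predictor against the best fixed
    prediction in hindsight, under squared loss.

    Illustrative sketch of the regret framework only, not the paper's
    exact minimax strategy. The offline comparator sees the whole
    sequence before choosing its (single, fixed) prediction.
    """
    xs = np.asarray(xs, dtype=float)
    online_loss = 0.0
    for t, x in enumerate(xs):
        # Online prediction before seeing x_t: mean of x_1..x_{t-1}
        # (0.0 by convention when no data has been seen yet).
        pred = xs[:t].mean() if t > 0 else 0.0
        online_loss += (pred - x) ** 2
    # The best fixed prediction under squared loss is the overall mean.
    offline_loss = float(((xs - xs.mean()) ** 2).sum())
    return online_loss - offline_loss

# Example: regret of the running-mean strategy on a bounded sequence.
rng = np.random.default_rng(0)
print(regret_vs_best_fixed(rng.uniform(-1, 1, size=1000)))
```

The paper's contribution is to solve the corresponding minimax problem exactly, i.e., to characterize the online strategy that minimizes this worst-case regret over constrained sequences, rather than evaluating a heuristic strategy as above.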