Online Linear Regression in Dynamic Environments via Discounting

Authors: Andrew Jacobsen, Ashok Cutkosky

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We develop algorithms for online linear regression which achieve optimal static and dynamic regret guarantees even in the complete absence of prior knowledge. We present a novel analysis showing that a discounted variant of the Vovk-Azoury-Warmuth forecaster achieves dynamic regret of the form $R_T(u) \le O\big(d\log(T) \vee \sqrt{d\,P_T^{\gamma}(u)\,T}\big)$, where $P_T^{\gamma}(u)$ is a measure of variability of the comparator sequence, and show that the discount factor achieving this result can be learned on-the-fly. We show that this result is optimal by providing a matching lower bound.
Researcher Affiliation | Academia | ¹Department of Computing Science, University of Alberta, Edmonton, Canada; ²Department of Electrical and Computer Engineering, Boston University, Boston, Massachusetts. Correspondence to: Andrew Jacobsen <ajjacobs@ualberta.ca>.
Pseudocode | Yes | Algorithm 1: Discounted VAW Forecaster (a hedged implementation sketch follows this table).
Open Source Code | No | The paper does not provide any statements about open-sourcing code or links to a code repository.
Open Datasets | No | The paper is theoretical and does not use or refer to any specific publicly available datasets for experimental evaluation.
Dataset Splits | No | The paper does not describe dataset splits for training, validation, or testing, as it focuses on theoretical analysis rather than empirical evaluation.
Hardware Specification | No | The paper is theoretical and does not discuss specific hardware used for experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers, as it focuses on theoretical algorithm design and analysis.
Experiment Setup | No | The paper does not describe a specific experimental setup, hyperparameters, or training configurations, as it focuses on theoretical algorithm design and analysis.
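
Since the paper includes pseudocode (Algorithm 1: Discounted VAW Forecaster) but no released code, below is a minimal sketch of the core recursion, written only from the abstract's description: the standard Vovk-Azoury-Warmuth forecaster with its sufficient statistics geometrically discounted by a fixed factor γ. The class name, interface, and default parameters are illustrative assumptions, not the authors' implementation, and the paper's on-the-fly learning of the discount factor is not implemented here.

```python
import numpy as np

class DiscountedVAW:
    """Sketch of a discounted Vovk-Azoury-Warmuth (VAW) forecaster.

    Maintains geometrically discounted second-moment and cross-moment
    statistics, and predicts with the VAW-style "forward" step that
    includes the current feature vector x_t before its label is seen.
    Assumption: fixed discount factor (the paper learns it online).
    """

    def __init__(self, dim: int, discount: float = 0.95, reg: float = 1.0):
        self.gamma = discount          # discount factor gamma in (0, 1]
        self.lam = reg                 # ridge regularization lambda
        self.S = np.zeros((dim, dim))  # discounted sum of x_s x_s^T
        self.b = np.zeros(dim)         # discounted sum of y_s x_s

    def step(self, x: np.ndarray, y: float) -> float:
        """One round: predict on features x, then update with label y."""
        # Discount the past statistics and fold the current x_t into the
        # covariance term (the hallmark of the VAW prediction).
        self.S = self.gamma * self.S + np.outer(x, x)
        self.b = self.gamma * self.b
        sigma = self.lam * np.eye(len(x)) + self.S
        y_hat = x @ np.linalg.solve(sigma, self.b)
        # Absorb the revealed label into the discounted cross-moment.
        self.b += y * x
        return y_hat

# Illustrative usage on a synthetic stream with a slowly drifting target.
rng = np.random.default_rng(0)
w = np.array([1.0, -2.0, 0.5])
model = DiscountedVAW(dim=3, discount=0.95)
for t in range(1000):
    x = rng.normal(size=3)
    y = float(x @ w) + 0.1 * rng.normal()
    y_hat = model.step(x, y)
    w += 0.01 * rng.normal(size=3)  # drifting comparator sequence
```

Smaller discount factors forget stale statistics faster and so track faster-moving comparators, at the cost of higher variance; the paper's contribution is tuning this trade-off on-the-fly without prior knowledge of the comparator's variability, which this fixed-γ sketch leaves out.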