An Equivalence Between Static and Dynamic Regret Minimization
Authors: Andrew Jacobsen, Francesco Orabona
NeurIPS 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We study the problem of dynamic regret minimization in online convex optimization, in which the objective is to minimize the difference between the cumulative loss of an algorithm and that of an arbitrary sequence of comparators. While the literature on this topic is very rich, a unifying framework for the analysis and design of these algorithms is still missing. In this paper we show that for linear losses, dynamic regret minimization is equivalent to static regret minimization in an extended decision space. Using this simple observation, we show that there is a frontier of lower bounds trading off penalties due to the variance of the losses and penalties due to variability of the comparator sequence, and provide a framework for achieving any of the guarantees along this frontier. As a result, we also prove for the first time that adapting to the squared path-length of an arbitrary sequence of comparators to achieve regret $R_T(u_1, \dots, u_T) \le O(\sqrt{\sum_t \lVert u_t - u_{t+1} \rVert^2})$ is impossible. However, using our framework we introduce an alternative notion of variability based on a locally-smoothed comparator sequence $\bar u_1, \dots, \bar u_T$, and provide an algorithm guaranteeing dynamic regret of the form $R_T(u_1, \dots, u_T) \le \tilde{O}(\sqrt{\sum_i \lVert \bar u_i - \bar u_{i+1} \rVert^2})$, while still matching in the worst case the usual path-length dependencies up to polylogarithmic terms. |
| Researcher Affiliation | Academia | Andrew Jacobsen, Università degli Studi di Milano and Politecnico di Milano, contact@andrew-jacobsen.com; Francesco Orabona, KAUST, francesco@orabona.com |
| Pseudocode | Yes | Algorithm 1: Dynamic-to-Static Reduction; Algorithm 2: Dynamic regret OLO through 1-dimensional reduction [9] |
| Open Source Code | No | The paper is theoretical and does not mention releasing source code. The NeurIPS checklist indicates this is a theoretical paper and does not include experiments requiring code. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments with datasets. The NeurIPS checklist indicates this is a theoretical paper. |
| Dataset Splits | No | The paper is theoretical and does not describe dataset splits for training, validation, or testing. The NeurIPS checklist indicates this is a theoretical paper. |
| Hardware Specification | No | The paper is theoretical and does not mention specific hardware used for experiments. The NeurIPS checklist indicates this is a theoretical paper. |
| Software Dependencies | No | The paper is theoretical and does not list specific software dependencies with version numbers for reproducibility. The NeurIPS checklist indicates this is a theoretical paper. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup with hyperparameters or training settings. The NeurIPS checklist indicates this is a theoretical paper. |
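
For reference, the regret notions quoted in the abstract above can be written out explicitly. The following is a minimal LaTeX sketch using standard online convex optimization notation, assuming $\ell_t$ denotes the loss at round $t$ and $x_t$ the algorithm's prediction; the shorthand names $P_T$ and $S_T$ are introduced here only for illustration and are not taken from the paper. The impossibility result in the abstract concerns bounds of the form $O(\sqrt{S_T})$, while the achievable guarantee replaces $u_1, \dots, u_T$ with the locally-smoothed sequence $\bar u_1, \dots, \bar u_T$ inside the same quantity.

```latex
\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Standard regret notions referenced in the abstract.
% Assumed notation (not taken from the paper): \ell_t is the round-t loss,
% x_t the learner's prediction, u and u_1,...,u_T the comparator(s).
\begin{align*}
  R_T(u) &= \sum_{t=1}^{T} \ell_t(x_t) - \sum_{t=1}^{T} \ell_t(u)
    && \text{(static regret, fixed comparator $u$)} \\
  R_T(u_1,\dots,u_T) &= \sum_{t=1}^{T} \ell_t(x_t) - \sum_{t=1}^{T} \ell_t(u_t)
    && \text{(dynamic regret, comparator sequence)} \\
  P_T &= \sum_{t=1}^{T-1} \lVert u_t - u_{t+1} \rVert
    && \text{(path-length)} \\
  S_T &= \sum_{t=1}^{T-1} \lVert u_t - u_{t+1} \rVert^{2}
    && \text{(squared path-length)}
\end{align*}
\end{document}
```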