Between Stochastic and Adversarial Online Convex Optimization: Improved Regret Bounds via Smoothness
Authors: Sarah Sachs, Hédi Hadiji, Tim van Erven, Cristóbal Guzmán
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this work, we establish novel regret bounds for online convex optimization in a setting that interpolates between stochastic i.i.d. and fully adversarial losses. By exploiting smoothness of the expected losses, these bounds replace dependence on the maximum gradient length by the variance of the gradients, which was previously known only for linear losses. In addition, they weaken the i.i.d. assumption by allowing, for example, adversarially poisoned rounds, which were previously considered in the expert and bandit setting. Our results extend this to the online convex optimization framework. In the fully i.i.d. case, our bounds match the rates one would expect from results in stochastic acceleration, and in the fully adversarial case, they gracefully deteriorate to match the minimax regret. We further provide lower bounds showing that our regret upper bounds are tight for all intermediate regimes in terms of the stochastic variance and the adversarial variation of the loss gradients. |
| Researcher Affiliation | Academia | Sarah Sachs, University of Amsterdam, Korteweg-de Vries Institute for Mathematics, s.c.sachs@uva.nl; Hédi Hadiji, University of Amsterdam, Korteweg-de Vries Institute for Mathematics, hedi.hadiji@gmail.com; Tim van Erven, University of Amsterdam, Korteweg-de Vries Institute for Mathematics, tim@timvanerven.nl; Cristóbal Guzmán, Pontificia Universidad Católica de Chile, Institute for Mathematical and Computational Eng., Facultad de Matemáticas and Escuela de Ingeniería, crguzmanp@mat.uc.cl |
| Pseudocode | No | The paper describes algorithms (OFTRL, OFTL) using mathematical equations (2) and (5) but does not provide structured pseudocode blocks or algorithms labeled as such. |
| Open Source Code | No | The paper does not provide any statement or link indicating that source code for the described methodology is openly available. |
| Open Datasets | No | This paper is theoretical and does not describe experiments involving datasets for training. Therefore, it does not provide information about public datasets. |
| Dataset Splits | No | This paper is theoretical and does not involve empirical experiments with data splits for training, validation, or testing. |
| Hardware Specification | No | This paper is theoretical and does not report on empirical experiments requiring specific hardware, so no hardware specifications are provided. |
| Software Dependencies | No | This paper is theoretical and does not describe software implementations or dependencies with version numbers. |
| Experiment Setup | No | This paper is theoretical and does not describe empirical experiments. Therefore, no experimental setup details, such as hyperparameters or system-level training settings, are provided. |
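The table notes that the paper presents its OFTRL algorithm only as mathematical equations, which are not reproduced here. Purely as an illustration of the general technique, the following is a minimal sketch of optimistic follow-the-regularized-leader with a squared-Euclidean regularizer over an l2 ball. The function names, the fixed step size `eta`, and the convention of passing gradient "hints" are assumptions for this sketch, not the paper's equations (2) and (5).

```python
import numpy as np

def project_ball(x, radius=1.0):
    """Euclidean projection onto the l2 ball of the given radius."""
    n = np.linalg.norm(x)
    return x if n <= radius else x * (radius / n)

def oftrl_quadratic(gradients, hints, eta=0.1, radius=1.0):
    """Sketch of optimistic FTRL with regularizer ||x||^2 / (2 * eta).

    At round t the learner plays
        x_t = Pi_B( -eta * (sum_{s < t} g_s + m_t) ),
    where m_t is an optimistic hint for the upcoming gradient
    (e.g., the previous observed gradient). This closed form is
    specific to the quadratic regularizer over an l2 ball and is
    an assumption of this sketch, not the paper's algorithm.
    """
    d = len(gradients[0])
    cum = np.zeros(d)       # running sum of observed gradients
    plays = []
    for g, m in zip(gradients, hints):
        x = project_ball(-eta * (cum + m), radius)
        plays.append(x)
        cum += g            # the gradient g is revealed after playing x
    return plays
```

With perfect hints (m_t equal to the realized gradient g_t), updates of this optimistic form are known to yield smaller regret than their non-optimistic counterparts, which is the mechanism the paper's variance-dependent bounds exploit.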