Regret Bounds without Lipschitz Continuity: Online Learning with Relative-Lipschitz Losses

Authors: Yihan Zhou, Victor Sanches Portella, Mark Schmidt, Nicholas Harvey

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this work, we consider online convex optimization (OCO) for relative Lipschitz and relative strongly convex functions. We extend the known regret bounds for classical OCO algorithms to the relative setting (the relative-Lipschitz condition is sketched after the table). Specifically, we show regret bounds for the follow-the-regularized-leader (FTRL) algorithm and a variant of online mirror descent. Due to the generality of these methods, these results yield regret bounds for a wide variety of OCO algorithms. Furthermore, we extend the results to algorithms with extra regularization such as regularized dual averaging.
Researcher Affiliation | Academia | Yihan Zhou (University of British Columbia); Victor S. Portella (University of British Columbia); Mark Schmidt (University of British Columbia, CCAI Affiliate Chair, Amii); Nicholas J. A. Harvey (University of British Columbia)
Pseudocode | Yes | Algorithm 1: Follow the Regularized Leader (FTRL); Algorithm 2: Dual-Stabilized Online Mirror Descent (a minimal FTRL update sketch appears after the table).
Open Source Code | No | The paper does not provide any concrete access to source code for the methodology described.
Open Datasets | No | This is a theoretical paper and does not use datasets for training or evaluation.
Dataset Splits | No | This is a theoretical paper and does not define dataset splits for validation.
Hardware Specification | No | This is a theoretical paper and does not involve computational experiments, thus no hardware specifications are mentioned.
Software Dependencies | No | This is a theoretical paper and does not specify any software dependencies with version numbers.
Experiment Setup | No | This is a theoretical paper and does not describe an experimental setup with hyperparameters or training settings.
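
For context on the Research Type row: a sketch of the relative-Lipschitz condition the abstract refers to, written in the Bregman-divergence form commonly used in this line of work; the symbols R (reference function), B_R (its Bregman divergence), f (a loss), and L (the relative-Lipschitz constant) are our notation, not quoted from the paper.

    B_R(y, x) = R(y) - R(x) - \langle \nabla R(x),\, y - x \rangle,
    \qquad
    \langle g,\, x - y \rangle \le L \sqrt{2\, B_R(y, x)}
    \quad \text{for all } x, y \in \mathcal{X} \text{ and } g \in \partial f(x).

With the Euclidean reference R(x) = \tfrac{1}{2}\|x\|_2^2, the right-hand side becomes L\|x - y\|_2, so the condition reduces to the usual consequence of L-Lipschitz continuity; the relative setting therefore generalizes the classical one.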
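
Similarly, for the Pseudocode row: a minimal, illustrative sketch of a follow-the-regularized-leader step in the spirit of Algorithm 1, assuming linearized losses and the Euclidean regularizer R(x) = ||x||^2 / (2*eta) so that the argmin has a closed form; the function name ftrl_euclidean and all variable names are hypothetical, not taken from the paper.

    import numpy as np

    def ftrl_euclidean(gradients, eta):
        # FTRL with regularizer R(x) = ||x||^2 / (2 * eta) and linearized losses:
        #   x_{t+1} = argmin_x [ sum_{s<=t} <g_s, x> + ||x||^2 / (2 * eta) ]
        #           = -eta * sum_{s<=t} g_s   (closed form for this regularizer).
        d = len(gradients[0])
        grad_sum = np.zeros(d)
        iterates = [np.zeros(d)]              # x_1 minimizes R alone
        for g in gradients:
            grad_sum += np.asarray(g, dtype=float)
            iterates.append(-eta * grad_sum)  # x_{t+1}
        return iterates

    # Illustrative usage with synthetic linear losses f_t(x) = <g_t, x>:
    rng = np.random.default_rng(0)
    gs = [rng.normal(size=3) for _ in range(5)]
    print(ftrl_euclidean(gs, eta=0.1)[-1])

Algorithm 2 (Dual-Stabilized Online Mirror Descent) works with a general mirror map rather than the squared Euclidean norm and, as we understand it, stabilizes the update in the dual space; it is not sketched here.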