Small-loss Adaptive Regret for Online Convex Optimization

Authors: Wenhao Yang, Wei Jiang, Yibo Wang, Ping Yang, Yao Hu, Lijun Zhang

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this paper, we first propose a novel algorithm that achieves a small-loss adaptive regret bound for exp-concave and smooth functions. Subsequently, to address the limitation that existing algorithms can only handle one type of convex function, we further design a universal algorithm capable of delivering small-loss adaptive regret bounds for general convex, exp-concave, and strongly convex functions simultaneously. This is challenging because the universal algorithm follows the meta-expert framework, and we need to ensure that the upper bounds for both the meta-regret and the expert-regret are of the small-loss type. Moreover, we provide a novel analysis demonstrating that our algorithms are also equipped with minimax adaptive regret bounds when the functions are non-smooth. (A sketch of the regret notions appears after the table.)
Researcher Affiliation | Collaboration | (1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China; (2) School of Artificial Intelligence, Nanjing University, Nanjing 210023, China; (3) Xiaohongshu Inc., Beijing, China.
Pseudocode | Yes | Algorithm 1: Follow-the-Leading-History for Smooth functions (FLHS); Algorithm 2: A Universal Algorithm for Exploiting Smoothness to Improve the Adaptive Regret (USIA); Algorithm 3: Expert Esp: Online Newton Step (ONS); Algorithm 4: Expert Esp: Smooth and Strongly Convex OGD (S2OGD); Algorithm 5: Expert Esp: Scale-free Online Gradient Descent (SOGD); Algorithm 6: Expert-algorithms for USIA; Algorithm 7: An Improved Implementation of USIA; Algorithm 8: Expert-algorithms for improved USIA. (An illustrative ONS sketch appears after the table.)
Open Source Code | No | The paper does not contain any statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | No | The paper is theoretical and focuses on algorithm design and theoretical bounds for online convex optimization. It does not use or mention any datasets for training.
Dataset Splits | No | The paper is theoretical and does not conduct experiments involving dataset splits for training, validation, or testing.
Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not provide details about experimental setup, hyperparameters, or training configurations.
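
The abstract row above refers to adaptive regret and small-loss bounds. For readers unfamiliar with the terminology, the following is a minimal notation sketch using the standard definition of strongly adaptive regret and the generic shape of a small-loss bound; the exact bounds and constants proved in the paper are not reproduced here.

```latex
% Standard (strongly) adaptive regret over every interval of length \tau:
\mathrm{SA\text{-}Regret}(T,\tau)
  = \max_{[r,\, r+\tau-1] \subseteq [T]}
    \left[ \sum_{t=r}^{r+\tau-1} f_t(\mathbf{x}_t)
         - \min_{\mathbf{x}\in\mathcal{X}} \sum_{t=r}^{r+\tau-1} f_t(\mathbf{x}) \right].

% A small-loss bound depends on the comparator's cumulative loss on the interval,
%   L^{*}_{[r,s]} = \min_{\mathbf{x}\in\mathcal{X}} \sum_{t=r}^{s} f_t(\mathbf{x}),
% rather than on the interval length alone, e.g. of the generic form
%   \tilde{O}\bigl(\sqrt{L^{*}_{[r,s]}}\bigr)   % up to polylogarithmic factors
% for convex and smooth losses, which improves on the minimax rate whenever
% the best comparator on the interval has small cumulative loss.
```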
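Algorithm 3 in the pseudocode list is an Online Newton Step expert. As a rough point of reference, here is a minimal, self-contained ONS sketch in Python for exp-concave losses over a Euclidean ball; the step parameter `gamma`, the initial scale `eps`, the ball radius, and the plain Euclidean projection are generic illustrative choices and are not taken from the paper's Algorithm 3.

```python
import numpy as np


class OnlineNewtonStep:
    """Minimal Online Newton Step (ONS) sketch for exp-concave losses.

    Generic textbook-style implementation, not the paper's Algorithm 3;
    `gamma` (step parameter), `eps` (initial matrix scale), and the
    Euclidean-ball projection are illustrative assumptions.
    """

    def __init__(self, dim, radius=1.0, gamma=0.5, eps=1.0):
        self.x = np.zeros(dim)          # current decision
        self.A = eps * np.eye(dim)      # running gradient outer-product matrix
        self.radius = radius            # domain: Euclidean ball of this radius
        self.gamma = gamma

    def predict(self):
        return self.x

    def update(self, grad):
        # Accumulate the outer product of the observed gradient.
        self.A += np.outer(grad, grad)
        # Newton-style step using the inverse of A.
        step = np.linalg.solve(self.A, grad)
        y = self.x - step / self.gamma
        # Project back onto the ball (plain Euclidean projection here for
        # simplicity; ONS formally uses a generalized projection in the A-norm).
        norm = np.linalg.norm(y)
        self.x = y if norm <= self.radius else y * (self.radius / norm)
```

A meta-algorithm such as FLHS or USIA would maintain many such experts over a set of geometric intervals and aggregate their predictions with an expert-tracking scheme; that machinery, and the paper's specific parameter choices, are omitted from this sketch.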