Universal Online Learning with Gradient Variations: A Multi-layer Online Ensemble Approach

Authors: Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou

Venue: NeurIPS 2023

Reproducibility Assessment (each item: variable, result, and the LLM's response)
Research Type: Theoretical
    LLM Response: "In this paper, we propose an online convex optimization approach with two different levels of adaptivity. ... Specifically, we obtain O(log V_T), O(d log V_T), and Ô(√V_T) regret bounds for strongly convex, exp-concave, and convex loss functions, respectively. ... Our approach is based on a multi-layer online ensemble framework incorporating novel ingredients. ... Our first contribution is proposing a multi-layer online ensemble approach. ... The second contribution arises from efficiency. ... We provide the regret guarantee below, which achieves the same guarantees as Theorem 1 with only one gradient per round. The proof is in Appendix C.2. Theorem 2. Under Assumptions 1 and 2, efficient Algorithm 1 enjoys O(log V_T), O(d log V_T), and Ô(√V_T) regret bounds for strongly convex, exp-concave, and convex functions, using only one gradient per round." (A minimal illustrative sketch of this two-layer ensemble idea follows the table below.)
Researcher Affiliation: Academia
    LLM Response: "Yu-Hu Yan, Peng Zhao, Zhi-Hua Zhou. National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing 210023, China. {yanyh, zhaop, zhouzh}@lamda.nju.edu.cn"
Pseudocode: Yes
    LLM Response: "Algorithm 1: Universal OCO with Gradient-variation Guarantees"
Open Source Code: No
    LLM Response: "The paper does not contain any explicit statements or links indicating that open-source code for the described methodology is provided."
Open Datasets: No
    LLM Response: "The paper is theoretical and focuses on algorithm design and regret bounds for online convex optimization. It does not conduct empirical experiments with datasets; therefore, it does not mention specific datasets used for training or their public availability."
Dataset Splits: No
    LLM Response: "The paper is theoretical and focuses on algorithm design and regret bounds. It does not involve empirical experiments with data, and thus no training/test/validation dataset splits are discussed."
Hardware Specification: No
    LLM Response: "The paper is purely theoretical, focusing on mathematical proofs, algorithm design, and regret bounds for online convex optimization. It does not describe any empirical experiments, and therefore, it does not specify any hardware used for computations."
Software Dependencies: No
    LLM Response: "The paper is theoretical, presenting algorithms and mathematical proofs for online convex optimization. It does not describe any empirical experiments or implementations, and therefore, it does not list any specific software dependencies with version numbers."
Experiment Setup: No
    LLM Response: "The paper is theoretical and focuses on algorithm design and mathematical analysis. It does not describe empirical experiments, and therefore, it does not provide details on experimental setup such as hyperparameters or system-level training settings."
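
To make the quoted method concrete, here is a minimal Python sketch of the two-layer online ensemble idea. It is not the paper's Algorithm 1: it simplifies the meta layer to plain Hedge over online-gradient-descent base learners on a geometric grid of step sizes, and it omits ingredients such as the optimism and correction terms that the actual multi-layer algorithm relies on for gradient-variation bounds. It does illustrate the efficiency property highlighted in Theorem 2: a single gradient, queried at the combined decision, updates both layers through linearized surrogate losses. All class names, constants, and the toy loss are illustrative assumptions.

import numpy as np

class BaseOGD:
    """Online gradient descent over a Euclidean ball of radius R."""
    def __init__(self, dim, step_size, radius=1.0):
        self.x = np.zeros(dim)
        self.eta = step_size
        self.R = radius

    def update(self, grad):
        self.x -= self.eta * grad
        norm = np.linalg.norm(self.x)
        if norm > self.R:  # project back onto the feasible ball
            self.x *= self.R / norm

class TwoLayerEnsemble:
    """Hedge meta-learner over OGD base learners on a step-size grid."""
    def __init__(self, dim, horizon, radius=1.0):
        # A geometric grid of step sizes hedges against the unknown
        # curvature regime (strongly convex / exp-concave / plain convex).
        n = max(1, int(np.ceil(np.log2(horizon))))
        self.bases = [BaseOGD(dim, radius / 2 ** k, radius) for k in range(n)]
        self.weights = np.ones(n) / n
        self.lr_meta = np.sqrt(np.log(n) / horizon)

    def decide(self):
        xs = np.stack([b.x for b in self.bases])   # shape (n, dim)
        return self.weights @ xs                   # convex combination

    def update(self, grad):
        # One gradient per round: the meta-learner and every base learner
        # are updated with the linearized surrogate loss <grad, x_base>.
        surrogate = np.array([grad @ b.x for b in self.bases])
        self.weights *= np.exp(-self.lr_meta * (surrogate - surrogate.min()))
        self.weights /= self.weights.sum()
        for b in self.bases:
            b.update(grad)

# Usage on a toy online least-squares problem (illustrative only).
rng = np.random.default_rng(0)
T, d = 1000, 5
learner = TwoLayerEnsemble(dim=d, horizon=T)
theta = 0.5 * rng.normal(size=d)
for t in range(T):
    x = learner.decide()
    grad = 2.0 * (x - theta)       # gradient of f_t(x) = ||x - theta||^2
    learner.update(grad)

Note the design point this sketch shares with the paper's efficient variant: rather than querying one gradient per base learner per round, all base learners receive the same gradient evaluated at the combined decision, so the gradient cost stays constant as layers and base learners are added.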