Adaptive Regret of Convex and Smooth Functions
Authors: Lijun Zhang, Tie-Yan Liu, Zhi-Hua Zhou
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We investigate online convex optimization in changing environments, and choose the adaptive regret as the performance measure. The goal is to achieve a small regret over every interval, so that the comparator is allowed to change over time. Different from previous works that only utilize the convexity condition, this paper further exploits smoothness to improve the adaptive regret. To this end, we develop novel adaptive algorithms for convex and smooth functions, and establish problem-dependent regret bounds over any interval. (A standard formulation of adaptive regret is sketched after this table.) |
| Researcher Affiliation | Collaboration | (1) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; (2) Microsoft Research Asia, Beijing, China. |
| Pseudocode | Yes | Algorithm 1: Scale-free online gradient descent (SOGD); Algorithm 2: Strongly Adaptive algorithm for Convex and Smooth functions (SACS); Algorithm 3: SACS with CPGC intervals. (An illustrative sketch of the SOGD-style update appears after this table.) |
| Open Source Code | No | The paper does not contain any statement or link indicating the availability of open-source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not describe experiments using datasets; therefore, it provides no information about public dataset availability or access. |
| Dataset Splits | No | The paper is theoretical and does not describe experiments; therefore, no information on training/validation/test dataset splits is provided. |
| Hardware Specification | No | The paper is theoretical and does not describe experiments; therefore, no specific hardware details used for running experiments are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe experiments; therefore, no specific software dependencies with version numbers are mentioned. |
| Experiment Setup | No | The paper is theoretical and focuses on algorithm design and theoretical guarantees, thus it does not provide specific experimental setup details such as hyperparameters or training configurations. |
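
For readers unfamiliar with the performance measure named in the Research Type row, the following is a standard formulation of (strongly) adaptive regret over intervals of length τ, in the spirit of the abstract quoted above; the notation is illustrative rather than copied from the paper.

```latex
% Strongly adaptive regret: worst-case static regret over every
% contiguous interval of length \tau within the horizon [T].
\mathrm{SA\text{-}Regret}(T,\tau)
  = \max_{[r,\, r+\tau-1] \subseteq [T]}
    \left( \sum_{t=r}^{r+\tau-1} f_t(\mathbf{x}_t)
         - \min_{\mathbf{x} \in \mathcal{X}} \sum_{t=r}^{r+\tau-1} f_t(\mathbf{x}) \right)
```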
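
Below is a minimal Python sketch of the scale-free step-size idea behind SOGD (Algorithm 1, as listed in the Pseudocode row). It is not the authors' exact algorithm: the L2-ball feasible set, the parameters `alpha` and `delta`, and the helper names are assumptions made for illustration.

```python
import numpy as np


def project_l2_ball(x, radius=1.0):
    """Euclidean projection onto an L2 ball (illustrative feasible set)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)


def sogd(grad_fn, x0, T, alpha=1.0, delta=1.0, radius=1.0):
    """Sketch of a scale-free online gradient descent update.

    The step size is scaled by the cumulative squared gradient norm,
    so no prior knowledge of the gradient magnitudes is required.
    grad_fn(t, x) is assumed to return the gradient of the round-t loss at x.
    """
    x = np.asarray(x0, dtype=float)
    cum_sq = 0.0          # running sum of squared gradient norms
    iterates = [x.copy()]
    for t in range(1, T + 1):
        g = grad_fn(t, x)
        cum_sq += float(np.dot(g, g))
        eta = alpha / np.sqrt(delta + cum_sq)   # scale-free step size
        x = project_l2_ball(x - eta * g, radius)
        iterates.append(x.copy())
    return iterates


# Illustrative usage on quadratic losses f_t(x) = 0.5 * ||x - z_t||^2:
# xs = sogd(lambda t, x: x - np.random.randn(5), np.zeros(5), T=100)
```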