Improved Dynamic Regret for Non-degenerate Functions
Authors: Lijun Zhang, Tianbao Yang, Jinfeng Yi, Rong Jin, Zhi-Hua Zhou
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | In this paper, we illustrate that the dynamic regret can be further improved by allowing the learner to query the gradient of the function multiple times, and meanwhile the strong convexity can be weakened to other non-degenerate conditions. We then extend our theoretical guarantee to functions that are semi-strongly convex or self-concordant. |
| Researcher Affiliation | Collaboration | National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China; Department of Computer Science, The University of Iowa, Iowa City, USA; AI Foundations Lab, IBM Thomas J. Watson Research Center, Yorktown Heights, NY, USA; Alibaba Group, Seattle, USA |
| Pseudocode | Yes | The paper presents Algorithm 1 (Online Multiple Gradient Descent, OMGD) and Algorithm 2 (Online Multiple Newton Update, OMNU) in pseudocode; see the sketch after the table. |
| Open Source Code | No | The paper does not contain any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not conduct experiments with datasets, so there is no mention of publicly available datasets for training. |
| Dataset Splits | No | The paper is theoretical and does not conduct experiments, so there is no mention of training/test/validation dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not conduct experiments, so no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not discuss software implementations or dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details or hyperparameters. |
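
For readers skimming the table, the following is a minimal sketch of the multiple-gradient-query idea referenced in the Research Type and Pseudocode rows: at each online round the learner plays its current decision and then applies several projected gradient steps to the just-revealed function before moving on. It assumes a Euclidean-ball feasible set, a fixed step size, and `K` gradient queries per round; the names (`project_ball`, `omgd`, `grad_fns`) are illustrative and not the authors' notation, so this is a sketch of the idea rather than the paper's algorithm verbatim.

```python
import numpy as np

def project_ball(x, radius=1.0):
    # Euclidean projection onto an L2 ball (illustrative choice of feasible set).
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def omgd(grad_fns, dim, eta=0.1, K=5, radius=1.0):
    """Sketch of the multiple-gradient-descent idea.

    grad_fns: one gradient oracle per round t.
    At each round the learner records its decision, then takes K
    projected gradient steps on the current function to obtain the
    decision for the next round.
    """
    x = np.zeros(dim)
    decisions = []
    for grad in grad_fns:
        decisions.append(x.copy())      # decision played at round t
        y = x
        for _ in range(K):              # K gradient queries on f_t
            y = project_ball(y - eta * grad(y), radius)
        x = y                           # becomes the next round's decision
    return decisions

# Toy usage: track the slowly moving minimizer of f_t(y) = 0.5 * ||y - c_t||^2.
if __name__ == "__main__":
    targets = [0.5 * np.array([np.sin(0.1 * t), np.cos(0.1 * t)]) for t in range(50)]
    grads = [(lambda y, c=c: y - c) for c in targets]   # gradient of 0.5*||y - c||^2
    xs = omgd(grads, dim=2, eta=0.5, K=5)
    print("final decision:", xs[-1], "final target:", targets[-1])
```

With more gradient steps per round the played decision tracks each round's minimizer more closely, which is the mechanism behind the improved dynamic regret bounds discussed in the paper.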