Non-stationary Online Convex Optimization with Arbitrary Delays
Authors: Yuanyu Wan, Chang Yao, Mingli Song, Lijun Zhang
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | To this end, we first propose a simple algorithm, namely DOGD, which performs a gradient descent step for each delayed gradient according to their arrival order. Despite its simplicity, our novel analysis shows that the dynamic regret of DOGD can be automatically bounded by O(√(d̄T)(P_T + 1)) under mild assumptions... Furthermore, we develop an improved algorithm, which reduces those dynamic regret bounds achieved by DOGD to O(√(d̄T(P_T + 1))) and O(√(dT(P_T + 1))), respectively. The key idea is to run multiple DOGD with different learning rates, and utilize a meta-algorithm to track the best one based on their delayed performance. Finally, we demonstrate that our improved algorithm is optimal in the worst case by deriving a matching lower bound. (A minimal sketch of the DOGD update appears after the table.) |
| Researcher Affiliation | Academia | (1) The State Key Laboratory of Blockchain and Data Security, Zhejiang University, Hangzhou, China; (2) School of Software Technology, Zhejiang University, Ningbo, China; (3) Hangzhou High Tech Zone (Binjiang) Institute of Blockchain and Data Security, Hangzhou, China; (4) National Key Laboratory for Novel Software Technology, Nanjing University, Nanjing, China. Correspondence to: Yuanyu Wan <wanyy@zju.edu.cn>. |
| Pseudocode | Yes | Algorithm 1 DOGD, Algorithm 2 Mild-OGD: Meta-algorithm, Algorithm 3 Mild-OGD: Expert-algorithm |
| Open Source Code | No | This information is not sufficient. The paper does not provide any statement or link regarding the availability of open-source code for the described methodology. |
| Open Datasets | No | This information is not sufficient. The paper is theoretical and does not conduct experiments involving datasets. |
| Dataset Splits | No | This information is not sufficient. The paper is theoretical and does not discuss dataset splits for training, validation, or testing. |
| Hardware Specification | No | This information is not sufficient. The paper is theoretical and does not report empirical experiments; thus, no hardware specifications are mentioned. |
| Software Dependencies | No | This information is not sufficient. The paper is theoretical and does not mention any specific software dependencies with version numbers for reproducibility. |
| Experiment Setup | No | This information is not sufficient. The paper is theoretical and does not describe an experimental setup with specific hyperparameters or training configurations. |
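
The abstract quoted above describes DOGD procedurally, so the following minimal Python sketch may help make the update concrete. It is an illustration under stated assumptions, not the paper's exact Algorithm 1: the names (`dogd`, `grad_oracle`, `arrivals`), the Euclidean-ball projection, and the fixed learning rate `eta` are hypothetical stand-ins, and the paper's Mild-OGD additionally runs several such copies with different learning rates under a meta-algorithm that tracks the best one from delayed performance.

```python
import numpy as np

def project(x, radius=1.0):
    """Euclidean projection onto a ball of the given radius, used here as a
    stand-in for projection onto the paper's decision set K."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def dogd(grad_oracle, arrivals, T, dim, eta):
    """Sketch of DOGD: one projected gradient step per delayed gradient,
    applied in arrival order.

    grad_oracle(s, x_s): gradient of the round-s loss at the point x_s
        that was played in round s (delayed feedback).
    arrivals[t]: rounds whose gradients arrive at round t, assumed to be
        listed in their arrival order.
    """
    x = np.zeros(dim)   # x_1: initial decision
    played = {}         # store played points so delayed gradients can be queried
    decisions = []
    for t in range(1, T + 1):
        decisions.append(x.copy())        # play x_t and suffer the round-t loss
        played[t] = x.copy()
        for s in arrivals.get(t, []):     # process feedback in arrival order
            g = grad_oracle(s, played[s]) # delayed gradient of f_s at x_s
            x = project(x - eta * g)      # one descent step per gradient
    return decisions
```

For instance, with `arrivals = {3: [1, 2]}` the gradients from rounds 1 and 2 are only applied after the round-3 decision has been played, which is the arbitrary-delay setting the paper analyzes.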