Anytime Online-to-Batch, Optimism and Acceleration

Author: Ashok Cutkosky

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We close this gap by introducing a black-box modification to any online learning algorithm whose iterates converge to the optimum in stochastic scenarios. We then consider the case of smooth losses, and show that combining our approach with optimistic online learning algorithms immediately yields a fast convergence rate of O(L/T^{3/2} + σ/√T) on L-smooth problems with σ^2 variance in the gradients. Finally, we provide a reduction that converts any adaptive online algorithm into one that obtains the optimal accelerated rate of O(L/T^2 + σ/√T), while still maintaining O(1/√T) convergence in the nonsmooth setting. Algorithm 1 Anytime Online-to-Batch. Theorem 1. Suppose g_1, . . . , g_T satisfy E[g_t | x_t] = ∇L(x_t) for some function L and g_t is independent of all other quantities given x_t. (See the code sketch after the table.)
Researcher Affiliation | Industry | Google Research, California, USA. Correspondence to: Ashok Cutkosky <ashok@cutkosky.com>.
Pseudocode | Yes | Algorithm 1 Anytime Online-to-Batch
Open Source Code | No | The paper is theoretical and does not mention any release of source code for the methodology described.
Open Datasets | No | The paper is theoretical and does not describe any experiments or use any specific datasets.
Dataset Splits | No | The paper is theoretical and does not describe any experiments or dataset splits.
Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for experiments or computations.
Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details or hyperparameters.
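Since the table above only quotes fragments of Algorithm 1, the following is a minimal Python sketch of the anytime online-to-batch idea the paper describes: stochastic gradients are queried at the running weighted average x_t of the online learner's iterates w_t and fed back to the learner as linear losses alpha_t * <g_t, w>. The inner learner below is plain online gradient descent, and all names and parameters (anytime_online_to_batch, grad_oracle, lr, the toy problem) are illustrative assumptions, not code from the paper.

import numpy as np

def anytime_online_to_batch(grad_oracle, w1, T, lr=0.1, alphas=None):
    # Hedged sketch of the anytime online-to-batch conversion (cf. Algorithm 1).
    # grad_oracle(x) is assumed to return a stochastic (sub)gradient g with
    # E[g | x] = gradient of L at x. The inner online learner here is plain
    # online gradient descent fed the linear losses alpha_t * <g_t, w>.
    if alphas is None:
        alphas = np.ones(T)              # uniform weights alpha_t = 1
    w = np.asarray(w1, dtype=float)      # online learner's iterate w_t
    weighted_sum = alphas[0] * w         # running sum of alpha_i * w_i
    alpha_sum = alphas[0]
    x = w.copy()
    for t in range(T):
        x = weighted_sum / alpha_sum     # x_t: weighted average of w_1, ..., w_t
        g = grad_oracle(x)               # gradient is evaluated at x_t, not at w_t
        w = w - lr * alphas[t] * g       # OGD step on the linear loss alpha_t * <g, w>
        if t + 1 < T:
            weighted_sum = weighted_sum + alphas[t + 1] * w
            alpha_sum = alpha_sum + alphas[t + 1]
    return x                             # the averaged point x_T, an anytime output

# Illustrative usage on a synthetic problem L(x) = 0.5 * ||x||^2 with noisy gradients.
rng = np.random.default_rng(0)
oracle = lambda x: x + 0.1 * rng.standard_normal(x.shape)
x_T = anytime_online_to_batch(oracle, w1=np.ones(5), T=2000, lr=0.05)
print(float(np.linalg.norm(x_T)))        # much smaller than the starting norm sqrt(5)

The key design point, as described in the quoted abstract, is that the gradient is taken at the averaged point x_t rather than at the online learner's own iterate w_t, which is what lets every x_t (not only the final average) carry a convergence guarantee.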