Optimal Rates for Random Order Online Optimization

Authors: Uri Sherman, Tomer Koren, Yishay Mansour

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We present and analyze two algorithms for random order online optimization, the first of which obtains the optimal regret up to additive factors. Our analysis relies on novel connections between algorithmic stability and generalization for sampling without replacement, analogous to those studied in the with-replacement i.i.d. setting, as well as on a refined average stability analysis of stochastic gradient descent.
Researcher Affiliation | Collaboration | Uri Sherman, Blavatnik School of Computer Science, Tel Aviv University (urisherman@mail.tau.ac.il); Tomer Koren, Blavatnik School of Computer Science, Tel Aviv University, and Google Research (tkoren@tauex.tau.ac.il); Yishay Mansour, Blavatnik School of Computer Science, Tel Aviv University, and Google Research (mansour.yishay@gmail.com).
Pseudocode | Yes | The paper provides pseudocode, namely Algorithm 1 (Reservoir SGD); an illustrative sketch follows this table.
Open Source Code | No | The paper does not provide any statement or link regarding the public availability of its source code.
Open Datasets | No | The paper is theoretical and discusses a set of datapoints Z = {ζ1, ..., ζT} without referring to a specific, publicly available dataset used for training or empirical evaluation.
Dataset Splits | No | The paper is theoretical and does not describe any training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not provide specific details about the hardware used for experiments.
Software Dependencies | No | The paper is theoretical and does not list any specific software dependencies with version numbers required to replicate experiments.
Experiment Setup | No | The paper is theoretical and does not provide specific details about experimental setup, hyperparameters, or training configurations.
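
The Pseudocode row above points to Algorithm 1, Reservoir SGD. For readers without the paper at hand, the sketch below shows the general shape of such a procedure: reservoir sampling maintains a uniform sample of the loss functions seen so far in the random-order stream, and each round takes a projected gradient step on a loss drawn from the reservoir. Everything here (the names reservoir_sgd, grad, project, eta, the reservoir size k, and the 1/t step size) is an illustrative assumption, not the paper's exact Algorithm 1.

```python
import random

import numpy as np


def reservoir_sgd(losses_stream, grad, x0, project, eta, k=10, rng=None):
    """Sketch of a reservoir-sampling SGD loop (illustrative; not the
    paper's exact Algorithm 1). At round t the learner plays x_t, then
    observes the t-th loss of the random-order stream, updates a size-k
    reservoir (a uniform sample of the prefix), and takes a projected
    gradient step on a loss drawn uniformly from the reservoir."""
    rng = rng or random.Random(0)
    x = np.asarray(x0, dtype=float)
    reservoir, iterates = [], []
    for t, z in enumerate(losses_stream, start=1):
        iterates.append(x.copy())  # the point played at round t
        # Classic reservoir sampling (Algorithm R): after this update the
        # reservoir is a uniform size-k sample of the first t elements.
        if len(reservoir) < k:
            reservoir.append(z)
        else:
            j = rng.randrange(t)
            if j < k:
                reservoir[j] = z
        # Projected SGD step on a loss sampled from the reservoir.
        g = grad(rng.choice(reservoir), x)
        x = project(x - eta(t) * g)
    return iterates


# Toy usage: strongly convex losses f_i(x) = 0.5 * (x - z_i)^2 over the
# interval [-1, 1], presented in a uniformly random order; the 1/t step
# size is a standard choice for strongly convex objectives (an assumption
# here, not taken from the paper).
rng = random.Random(42)
zs = [rng.uniform(-1.0, 1.0) for _ in range(1000)]
rng.shuffle(zs)  # random-order model: fixed set, random permutation
iterates = reservoir_sgd(
    losses_stream=zs,
    grad=lambda z, x: x - z,  # gradient of 0.5 * (x - z)^2
    x0=np.zeros(1),
    project=lambda x: np.clip(x, -1.0, 1.0),
    eta=lambda t: 1.0 / t,
    rng=rng,
)
```

Sampling the gradient from the reservoir rather than from the just-revealed loss is what, heuristically, decouples the update from the most recent arrival; the paper's analysis makes this precise through the without-replacement stability arguments mentioned in the abstract.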