Strongly Adaptive Online Learning

Authors: Amit Daniely, Alon Gonen, Shai Shalev-Shwartz

ICML 2015

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "Strongly adaptive algorithms are algorithms whose performance on every time interval is close to optimal. We present a reduction that can transform standard low-regret algorithms to strongly adaptive. As a consequence, we derive simple, yet efficient, strongly adaptive algorithms for a handful of problems." [...] "In this section we sketch the proof of Theorem 1. A full proof is detailed in Appendix 1. The analysis of SAOL is divided into two parts. The first challenge is to prove the theorem for the intervals in I (see Lemma 2). Then, the theorem should be extended to any interval (end of Appendix 1)." (A code sketch of the SAOL reduction follows the table.)
Researcher Affiliation | Academia | Amit Daniely (AMIT.DANIELY@MAIL.HUJI.AC.IL), Alon Gonen (ALONGNN@CS.HUJI.AC.IL), and Shai Shalev-Shwartz (SHAIS@CS.HUJI.AC.IL), The Hebrew University.
Pseudocode | Yes | "Algorithm 1 Strongly Adaptive Online Learner (with blackbox algorithm B)"
Open Source Code | No | The paper contains no statement or link indicating that source code for the described methodology is publicly available.
Open Datasets | No | The paper is theoretical and describes no experiments on specific datasets, so no publicly available training data is identified.
Dataset Splits | No | With no empirical experiments, the paper provides no train/validation/test split information.
Hardware Specification | No | The paper describes no experimental setup or hardware for running experiments.
Software Dependencies | No | The paper names no software dependencies or version numbers needed for experimental reproducibility.
Experiment Setup | No | The paper focuses on algorithm design and analysis and does not describe an experimental setup with hyperparameters or training configurations.
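
The quoted material above refers to the paper's SAOL reduction (Algorithm 1): run a fresh instance of a low-regret blackbox algorithm B on every interval of a geometric covering of time, and combine the currently active instances with multiplicative weights. Since the paper ships no code, the following is a minimal Python sketch of that reduction as we read it. The RunningMean blackbox, the squared-loss prediction game, and the deterministic weighted-average combination are illustrative assumptions on our part, not details fixed by the paper (which, for general bounded losses, selects an active blackbox at random in proportion to its weight).

import math

def active_intervals(t):
    """Geometric covering intervals containing time t (1-indexed).
    For each scale k with 2**k <= t, this is the unique interval
    [i * 2**k, (i + 1) * 2**k - 1] (i >= 1) that contains t."""
    out, k = [], 0
    while 2 ** k <= t:
        length = 2 ** k
        start = (t // length) * length
        out.append((start, start + length - 1))
        k += 1
    return out

class RunningMean:
    """Hypothetical low-regret blackbox for squared loss on [0, 1]:
    follow-the-leader, i.e. predict the mean of past outcomes."""
    def __init__(self):
        self.n, self.s = 0, 0.0
    def predict(self):
        return self.s / self.n if self.n else 0.5
    def update(self, y):
        self.n, self.s = self.n + 1, self.s + y

class SAOL:
    """Sketch of Algorithm 1: one blackbox instance per covering
    interval, combined with multiplicative weights.  Losses are
    assumed to lie in [0, 1]."""
    def __init__(self, make_blackbox):
        self.make_blackbox = make_blackbox
        self.experts = {}  # interval -> [blackbox instance, weight]

    @staticmethod
    def _eta(interval):
        # Learning rate ~ 1/sqrt(|I|), capped at 1/2.
        q, s = interval
        return min(0.5, 1.0 / math.sqrt(s - q + 1))

    def predict(self, t):
        live = set(active_intervals(t))
        # Retire expired intervals; new intervals enter with weight eta_I.
        self.experts = {I: e for I, e in self.experts.items() if I in live}
        for I in live:
            if I not in self.experts:
                self.experts[I] = [self.make_blackbox(), self._eta(I)]
        self._preds = {I: bb.predict() for I, (bb, _) in self.experts.items()}
        total = sum(w for _, w in self.experts.values())
        self._pred = sum(w * self._preds[I]
                         for I, (_, w) in self.experts.items()) / total
        return self._pred

    def update(self, y):
        loss = lambda p: (p - y) ** 2  # squared loss, bounded by 1 on [0, 1]
        for I, slot in self.experts.items():
            r = loss(self._pred) - loss(self._preds[I])  # regret vs. blackbox on I
            slot[1] *= 1.0 + self._eta(I) * r            # multiplicative update
            slot[0].update(y)

# Usage: predict a bounded sequence, calling predict then update each round.
learner = SAOL(RunningMean)
for t, y in enumerate([0.1, 0.2, 0.9, 0.8, 0.85], start=1):
    p = learner.predict(t)
    learner.update(y)

Note that at most floor(log2(t)) + 1 intervals are active at time t, so each round touches only logarithmically many blackbox instances; this is the source of SAOL's modest overhead relative to running B alone.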