Online Agnostic Boosting via Regret Minimization

Authors: Nataly Brukhim, Xinyi Chen, Elad Hazan, Shay Moran

NeurIPS 2020

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | In this work we provide the first agnostic online boosting algorithm; that is, given a weak learner with only marginally-better-than-trivial regret guarantees, our algorithm boosts it to a strong learner with sublinear regret. Our algorithm is based on an abstract (and simple) reduction to online convex optimization, which efficiently converts an arbitrary online convex optimizer to a boosting algorithm. Moreover, this reduction extends to the statistical as well as the online realizable settings, thus unifying the 4 cases of statistical/online and agnostic/realizable boosting. (An illustrative sketch of this reduction appears after the table.)
Researcher Affiliation | Collaboration | Nataly Brukhim (Google AI Princeton; Princeton University, Department of Computer Science; nbrukhim@princeton.edu), Xinyi Chen (Google AI Princeton; Princeton University, Department of Computer Science; xinyic@princeton.edu), Elad Hazan (Google AI Princeton; Princeton University, Department of Computer Science; ehazan@princeton.edu), Shay Moran (Department of Mathematics, Technion - Israel Institute of Technology; smoran@technion.ac.il)
Pseudocode | Yes | Algorithm 1: Online Agnostic Boosting with OCO; Algorithm 2: Boosting with OCO; Algorithm 3: Online Boosting with OCO.
Open Source Code | No | The paper does not provide any links to source code repositories or explicit statements about code availability.
Open Datasets | No | The paper is theoretical and focuses on algorithm design and proofs; it does not report empirical experiments using specific datasets.
Dataset Splits | No | The paper is theoretical and does not report empirical experiments, so there are no training, validation, or test dataset splits to describe.
Hardware Specification | No | The paper is theoretical and focuses on algorithm design and proofs; it does not mention any specific hardware used for experiments.
Software Dependencies | No | The paper is theoretical and focuses on algorithm design and proofs; it does not list any software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs; it does not describe any experimental setup details such as hyperparameters or training configurations.
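
To make the reduction described in the abstract more concrete, below is a minimal sketch of the general shape of a boosting-via-OCO algorithm: N copies of a weak online learner, each paired with its own online convex optimizer, combined by majority vote. This is not the paper's Algorithm 1: the `CountingWeakLearner`, the probabilistic relabeling rule, and the linear surrogate loss are all illustrative assumptions, and projected online gradient descent on [0, 1] stands in for an arbitrary OCO subroutine.

```python
import random


class OnlineGradientDescent:
    """OCO subroutine: projected online gradient descent on [0, 1].
    Any sublinear-regret OCO algorithm could be substituted here."""

    def __init__(self, lr=0.1):
        self.p = 0.5
        self.lr = lr

    def predict(self):
        return self.p

    def update(self, grad):
        # Gradient step, then projection back onto the interval [0, 1].
        self.p = min(1.0, max(0.0, self.p - self.lr * grad))


class CountingWeakLearner:
    """Hypothetical stand-in for a weak online learner: ignores the
    features and tracks a running label majority."""

    def __init__(self):
        self.score = 0

    def predict(self, x):
        return 1 if self.score >= 0 else -1

    def update(self, x, y):
        self.score += y


class BoostingViaOCO:
    """Illustrative booster: each weak learner is paired with its own OCO
    instance, whose iterate p controls how the true label is randomly
    relabeled before being fed to that weak learner."""

    def __init__(self, weak_learner_factory, n_learners=10):
        self.learners = [weak_learner_factory() for _ in range(n_learners)]
        self.oco = [OnlineGradientDescent() for _ in range(n_learners)]

    def predict(self, x):
        # Strong prediction: unweighted majority vote of the weak learners.
        votes = sum(wl.predict(x) for wl in self.learners)
        return 1 if votes >= 0 else -1

    def update(self, x, y):
        for wl, oco in zip(self.learners, self.oco):
            p = oco.predict()
            # Pass the true label with probability p, a uniform random label
            # otherwise (an assumed relabeling rule, for illustration only).
            label = y if random.random() < p else random.choice((-1, 1))
            wl.update(x, label)
            # Assumed surrogate loss p * y * h(x); its gradient in p is y * h(x).
            oco.update(y * wl.predict(x))


# Toy usage on a stream where the label is the sign of x.
random.seed(0)
booster = BoostingViaOCO(CountingWeakLearner, n_learners=5)
mistakes = 0
for t in range(1000):
    x = random.uniform(-1, 1)
    y = 1 if x >= 0 else -1
    mistakes += booster.predict(x) != y
    booster.update(x, y)
print(f"mistakes: {mistakes} / 1000")
```

The design point worth noting is modularity, which matches the abstract's claim: the booster treats both the weak learner and the OCO method as black boxes, so any online learner with a marginally-better-than-trivial edge and any sublinear-regret online convex optimizer can be slotted in.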