Efficient Online Linear Optimization with Approximation Algorithms

Authors: Dan Garber

NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We present new algorithms with significantly improved oracle complexity for both the full information and bandit variants of the problem. Mainly, for both variants, we present α-regret bounds of O(T^{-1/3}), where T is the number of prediction rounds, using only O(log(T)) calls to the approximation oracle per iteration, on average. These are the first results to obtain both an average oracle complexity of O(log(T)) (or even poly-logarithmic in T) and an α-regret bound of O(T^{-c}) for a constant c > 0, for both variants.
Researcher Affiliation | Academia | Dan Garber, Technion - Israel Institute of Technology, dangar@technion.ac.il
Pseudocode | Yes | Algorithm 1: Online Gradient Descent Without Feasibility (a generic sketch of this style of update follows the table).
Open Source Code | No | The paper does not provide any statements or links indicating the availability of open-source code for the described methodology.
Open Datasets | No | The paper is theoretical and does not use or describe any specific datasets for training or evaluation, nor does it provide any access information for such datasets.
Dataset Splits | No | The paper is theoretical and does not discuss dataset splits (training, validation, test), as it does not involve empirical experiments with datasets.
Hardware Specification | No | The paper is theoretical and does not describe any specific hardware used for running experiments.
Software Dependencies | No | The paper is theoretical and focuses on algorithms and proofs, without mentioning specific software dependencies or their version numbers for implementation.
Experiment Setup | No | The paper is theoretical and does not provide details about an experimental setup, such as hyperparameters or system-level training settings.
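For context on the Pseudocode row, the following is a minimal, hypothetical Python sketch of online (sub)gradient descent in which the usual projection step is simply omitted, which is the flavor of update that the name "Online Gradient Descent Without Feasibility" suggests. It is not the paper's Algorithm 1, which couples such updates with calls to the α-approximation oracle over the feasible set; the function name, signature, and fixed step size eta are illustrative assumptions.

```python
import numpy as np

def ogd_without_feasibility(loss_grads, y0, eta):
    """Hypothetical sketch: online gradient descent with no projection step.

    loss_grads: iterable of per-round gradient vectors g_1, ..., g_T
    y0:         starting point (need not lie in the feasible set)
    eta:        fixed step size (an assumption; schedules are also common)

    Returns the list of iterates y_1, ..., y_{T+1}.
    """
    ys = [np.asarray(y0, dtype=float)]
    for g in loss_grads:
        # Standard OGD step; because there is no projection back onto the
        # feasible set, the iterate is allowed to drift outside of it.
        ys.append(ys[-1] - eta * np.asarray(g, dtype=float))
    return ys

# Usage example: three rounds with 2-D gradients and step size 0.1.
grads = [np.array([1.0, 0.0]), np.array([0.0, -1.0]), np.array([0.5, 0.5])]
iterates = ogd_without_feasibility(grads, y0=np.zeros(2), eta=0.1)
```

In the paper's setting, avoiding exact projection is the point: the feasible set is only accessible through an α-approximate linear optimization oracle, so the algorithm must get by with infeasible intermediate iterates and occasional oracle calls rather than exact projections.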