Optimistic Bandit Convex Optimization

Authors: Scott Yang, Mehryar Mohri

NeurIPS 2016

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We introduce the general and powerful scheme of predicting information re-use in optimization algorithms. This allows us to devise a computationally efficient algorithm for bandit convex optimization with new state-of-the-art guarantees for both Lipschitz loss functions and loss functions with Lipschitz gradients.
Researcher Affiliation | Collaboration | Mehryar Mohri (Courant Institute and Google); Scott Yang (Courant Institute)
Pseudocode | Yes | Figure 1: Pseudocode of OPTIMISTICBCO, with R: int(K) → ℝ, δ ∈ (0, 1], η > 0, k ∈ ℤ, and x₁ ∈ K. (A generic illustrative sketch follows this table.)
Open Source Code | No | The paper does not provide any links to open-source code or explicitly state that code for the described methodology is being released.
Open Datasets | No | This paper is theoretical and does not use or refer to specific datasets for training, validation, or testing. Therefore, no concrete access information for a publicly available or open dataset is provided.
Dataset Splits | No | This paper is theoretical and does not involve empirical experiments with datasets that would require validation splits. No information about dataset splits was provided.
Hardware Specification | No | This is a theoretical paper and does not describe any experiments that would require hardware specifications. No hardware details were mentioned.
Software Dependencies | No | This is a theoretical paper and does not describe any experiments that would require software dependencies with version numbers. No such details were mentioned.
Experiment Setup | No | This is a theoretical paper and does not describe any experiments that would involve hyperparameter tuning or specific training setups. No such details were mentioned.
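For readers unfamiliar with the setting, the sketch below shows, in broad strokes, what a bandit convex optimization loop with an optimistic (predicted-gradient) update can look like. It is a minimal hypothetical example, not the paper's OPTIMISTICBCO: it assumes a Euclidean ball as the feasible set K, a spherical one-point gradient estimate with smoothing parameter δ, a plain Euclidean regularizer in place of the barrier R, a learning rate η, and the previous gradient estimate as the prediction of the next one. All function and parameter names are invented for illustration.

```python
# Minimal sketch of an optimistic bandit convex optimization loop.
# NOT the paper's OPTIMISTICBCO; a generic, hypothetical illustration only.
import numpy as np

def project_to_ball(x, radius=1.0):
    """Project x onto the Euclidean ball of the given radius (stand-in for K)."""
    norm = np.linalg.norm(x)
    return x if norm <= radius else x * (radius / norm)

def optimistic_bco_sketch(loss, dim, T, delta=0.1, eta=1e-3, radius=1.0, seed=0):
    rng = np.random.default_rng(seed)
    x = np.zeros(dim)                # x_1 in K
    grad_sum = np.zeros(dim)         # running sum of gradient estimates
    prediction = np.zeros(dim)       # optimistic guess of the next gradient
    losses = []
    for _ in range(T):
        # Play a perturbed point and observe only its loss value (bandit feedback).
        u = rng.normal(size=dim)
        u /= np.linalg.norm(u)
        y = project_to_ball(x + delta * u, radius)
        value = loss(y)
        losses.append(value)
        # One-point gradient estimate of the smoothed loss: (dim / delta) * loss(y) * u.
        g_hat = (dim / delta) * value * u
        grad_sum += g_hat
        # Optimistic FTRL-style step: past estimates plus a prediction of the next gradient,
        # here naively taken to be the most recent estimate.
        prediction = g_hat
        x = project_to_ball(-eta * (grad_sum + prediction), radius)
    return np.array(losses)

if __name__ == "__main__":
    quadratic = lambda z: float(np.sum((z - 0.3) ** 2))
    print(optimistic_bco_sketch(quadratic, dim=5, T=1000).mean())
```

The point of the sketch is the shape of the update: the learner only sees loss values, forms a gradient estimate from them, and regularizes against the accumulated estimates plus an optimistic prediction of the upcoming gradient, which is the "information re-use" idea described in the abstract.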