Adapting to Misspecification in Contextual Bandits
Authors: Dylan J. Foster, Claudio Gentile, Mehryar Mohri, Julian Zimmert
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | A major research direction in contextual bandits is to develop algorithms that are computationally efficient, yet support flexible, general-purpose function approximation. ... We introduce a new family of oracle-efficient algorithms for ε-misspecified contextual bandits that adapt to unknown model misspecification both for finite and infinite action settings. Given access to an online oracle for square loss regression, our algorithm attains optimal regret and in particular optimal dependence on the misspecification level, with no prior knowledge. Specializing to linear contextual bandits with infinite actions in d dimensions, we obtain the first algorithm that achieves the optimal O(d√T + ε√d·T) regret bound for unknown ε. On a conceptual level, our results are enabled by a new optimization-based perspective on the regression oracle reduction framework of Foster and Rakhlin [20], which we believe will be useful more broadly. |
| Researcher Affiliation | Collaboration | Dylan J. Foster dylanf@mit.edu Claudio Gentile cgentile@google.com Mehryar Mohri mohri@google.com Julian Zimmert zimmert@google.com Massachusetts Institute of Technology. Google Research. Courant Institute of Mathematical Sciences. |
| Pseudocode | Yes | Algorithm 1: SquareCB [20] ... Algorithm 2: SquareCB.Inf ... Algorithm 3: Corralling [9] ... Algorithm 4: SquareCB.Imp (for base alg. m) |
| Open Source Code | No | The paper does not provide any statement about releasing open-source code for the described methodology. |
| Open Datasets | No | The paper is theoretical and does not describe experiments with specific datasets, nor does it provide concrete access information for any dataset used for training. |
| Dataset Splits | No | The paper is theoretical and does not describe empirical experiments with dataset splits, thus no validation split information is provided. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for computations or experiments. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers for its implementation or theoretical analysis. |
| Experiment Setup | No | The paper discusses algorithmic parameters such as learning rates and time horizons within its theoretical framework, but it does not provide concrete hyperparameter values or system-level settings for an empirical experimental setup. |
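Although the paper itself ships no code, the abstract's reference to the regression oracle reduction of Foster and Rakhlin [20] points at a concrete, simple mechanism: inverse gap weighting, which converts a regressor's predicted rewards into an action distribution. The sketch below is illustrative only, not the authors' implementation; the function name, the use of predicted rewards rather than losses, and the fixed `gamma` learning-rate parameter are all assumptions for the example.

```python
import numpy as np

def inverse_gap_weighting(predicted_rewards, gamma):
    """Map a regression oracle's predicted rewards to action-selection
    probabilities via inverse gap weighting (Foster-Rakhlin style sketch).

    Each non-greedy action a gets probability 1 / (K + gamma * gap_a),
    where gap_a is its predicted-reward gap to the greedy action; the
    greedy action receives the remaining mass.
    """
    y = np.asarray(predicted_rewards, dtype=float)
    K = len(y)
    b = int(np.argmax(y))  # greedy (empirically best) action
    p = np.zeros(K)
    for a in range(K):
        if a != b:
            p[a] = 1.0 / (K + gamma * (y[b] - y[a]))
    p[b] = 1.0 - p.sum()  # remaining mass goes to the greedy action
    return p
```

Larger `gamma` concentrates mass on the greedy action (more exploitation); in the actual analysis `gamma` is set as a function of the horizon and oracle regret, and the adaptive algorithms in the paper tune it without knowing the misspecification level ε.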