A Unified Model and Dimension for Interactive Estimation

Authors: Nataly Brukhim, Miro Dudik, Aldo Pacchiano, Robert E. Schapire

NeurIPS 2023

Reproducibility Variable | Result | LLM Response

Research Type | Theoretical
    We study an abstract framework for interactive learning called interactive estimation in which the goal is to estimate a target from its similarity to points queried by the learner. We introduce a combinatorial measure called dissimilarity dimension which is used to derive learnability bounds in our model. We present a simple, general, and broadly-applicable algorithm, for which we obtain both regret and PAC generalization bounds that are polynomial in the new dimension. We show that our framework subsumes and thereby unifies two classic learning models: statistical-query learning and structured bandits. We also delineate how the dissimilarity dimension is related to well-known parameters for both frameworks, in some cases yielding significantly improved analyses.

Researcher Affiliation | Collaboration
    Nataly Brukhim, Princeton University (nbrukhim@princeton.edu)
    Miro Dudik, Microsoft Research (mdudik@microsoft.com)
    Aldo Pacchiano, Broad Institute of MIT and Harvard & Boston University (apacchia@broadinstitute.org)
    Robert Schapire, Microsoft Research (schapire@microsoft.com)

Pseudocode | Yes
    Algorithm 1: Interactive Estimation via Least Squares
    Algorithm 2: PAC Interactive Estimation
    Algorithm 3: Optimistic Interactive Estimation via Least Squares
    Algorithm 4: Interactive Estimation via Least Squares
    Algorithm 5: Interactive Estimation for Structured Bandits
    Algorithm 6: Optimistic Interactive Estimation for Structured Bandits

Open Source Code | No
    The paper does not contain any explicit statements or links indicating the availability of open-source code for the described methodology.

Open Datasets | No
    The paper describes theoretical models and gives examples, but does not use specific named datasets for empirical evaluation or provide access information for any dataset.

Dataset Splits | No
    The paper is theoretical and does not involve empirical experiments with dataset splits.

Hardware Specification | No
    The paper is theoretical and does not describe any empirical experiments, so no hardware specifications are mentioned.

Software Dependencies | No
    The paper is theoretical and does not describe any empirical experiments, so no software dependencies with version numbers are mentioned.

Experiment Setup | No
    The paper is theoretical and does not describe any empirical experiments with specific hyperparameters or system-level training settings.
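For orientation, the flavor of "Interactive Estimation via Least Squares" (the title of Algorithm 1 above) can be sketched from the abstract's description alone: the learner repeatedly queries the candidate that best fits, in the least-squares sense, the noisy similarity responses observed so far. This is a minimal illustrative sketch, not the paper's algorithm or notation; the function name, the finite candidate class, the similarity function, and the response oracle are all assumptions introduced here.

```python
def interactive_estimation_ls(candidates, similarity, oracle, T):
    """Illustrative sketch of an interactive-estimation loop via least squares.

    candidates: finite hypothesis class, assumed to contain the target
    similarity: similarity(z, f) in [0, 1] between a query z and a candidate f
    oracle:     returns a (possibly noisy) observation of similarity(z, target)
    T:          number of interaction rounds
    """
    history = []  # (query, observed response) pairs
    query = candidates[0]
    for _ in range(T):
        # Least-squares step: the candidate whose predicted similarities best
        # fit all responses observed so far becomes the next query.
        query = min(
            candidates,
            key=lambda f: sum((similarity(q, f) - y) ** 2 for q, y in history),
        )
        history.append((query, oracle(query)))
    return query
```

With a noiseless oracle and candidates on a line under the similarity 1 - |z - z'|, for instance, the loop locks onto the target within a few rounds; the paper's actual guarantees (regret and PAC bounds polynomial in the dissimilarity dimension) concern the general noisy setting.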