Learning Predictions for Algorithms with Predictions

Authors: Misha Khodak, Maria-Florina Balcan, Ameet Talwalkar, Sergei Vassilvitskii

NeurIPS 2022

Reproducibility Variable Result LLM Response
Research Type | Theoretical | "We introduce a general design approach for algorithms that learn predictors: (1) identify a functional dependence of the performance measure on the prediction quality, and (2) apply techniques from online learning to learn predictors, tune robustness-consistency trade-offs, and bound the sample complexity." "We bridge this gap and provide a framework for obtaining learning-theoretic guarantees for algorithms with predictions."
Researcher Affiliation | Collaboration | Mikhail Khodak, Carnegie Mellon University (khodak@cmu.edu); Maria-Florina Balcan, Carnegie Mellon University (ninamf@cs.cmu.edu); Ameet Talwalkar, Carnegie Mellon University (talwalkar@cmu.edu); Sergei Vassilvitskii, Google Research New York (sergeiv@google.com)
Pseudocode | Yes | Algorithm 1: Generic application of an online learning algorithm over X to learn a predictor for a method Algorithm With Prediction that takes advice from X and returns upper bounds U_t on its cost. The goal of Online Algorithm is low regret over the sequence U_t, so that on average C_t is upper-bounded by the smallest possible average of U_t, up to some error decreasing in T. For specific instantiations of algorithms and feedback see Table 1.

    initialize x_1 ∈ X for Online Algorithm
    for instance t = 1, ..., T do
        obtain instance I_t
        run Algorithm With Prediction(I_t, x_t)
        suffer cost C_t(x_t) ≤ U_t(x_t)
        get feedback to construct upper bound U_t
        x_{t+1} ← Online Algorithm({U_s}_{s=1}^t, x_1)
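The loop in Algorithm 1 can be sketched in code. The sketch below is illustrative, not the paper's implementation: it assumes a toy convex surrogate U_t(x) = ||x - y_t||^2 for a hypothetical instance-optimal prediction y_t, and uses online gradient descent as one possible choice of Online Algorithm; in the framework, U_t would come from the algorithm-with-predictions itself.

```python
import numpy as np

# Hypothetical instantiation of Algorithm 1: learn a predictor x_t by
# online gradient descent against per-instance upper bounds
# U_t(x) = ||x - y_t||^2, where y_t stands in for the prediction that
# would have been best for instance I_t.
rng = np.random.default_rng(0)
d, T, lr = 5, 200, 0.1
x = np.zeros(d)                      # x_1: initial prediction in X = R^d
targets = rng.normal(size=(T, d))    # stand-in per-instance optima y_t

total_cost = 0.0
for t in range(T):
    y_t = targets[t]                 # "obtain instance I_t"
    total_cost += np.sum((x - y_t) ** 2)   # suffered cost C_t(x_t) <= U_t(x_t)
    grad = 2.0 * (x - y_t)           # feedback: gradient of U_t at x_t
    x = x - lr * grad                # x_{t+1}: one online-gradient step

# Average regret against the best fixed predictor in hindsight
# (for squared loss, the empirical mean of the y_t).
best_fixed = targets.mean(axis=0)
avg_regret = total_cost / T - np.mean(
    np.sum((targets - best_fixed) ** 2, axis=1)
)
print(round(avg_regret, 3))
```

The average regret shrinks as T grows, which is exactly the guarantee the framework needs: the learner's average cost approaches the smallest achievable average of the upper bounds U_t.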
Open Source Code | No | The paper explicitly marks questions regarding code and data availability as "N/A" in its ethics checklist, and provides no statements or links to open-source code.
Open Datasets | No | The paper is theoretical and does not utilize or provide access to a concrete dataset for training purposes. It refers to "samples from D" in theoretical contexts, but not as a publicly accessible dataset.
Dataset Splits | No | As a theoretical paper, it does not describe experimental procedures involving training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not mention any specific hardware used for experiments or computations. The ethics checklist marks this section as "N/A".
Software Dependencies | No | The paper is theoretical and does not specify any software dependencies with version numbers (e.g., specific libraries, frameworks, or solvers).
Experiment Setup | No | The paper is theoretical and does not include details on experimental setup, hyperparameters, or specific training configurations.