Swap Agnostic Learning, or Characterizing Omniprediction via Multicalibration

Authors: Parikshit Gopalan, Michael Kim, Omer Reingold

NeurIPS 2023

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We introduce and study Swap Agnostic Learning. The problem can be phrased as a game between a predictor and an adversary: first, the predictor selects a hypothesis h; then, the adversary plays in response, and for each level set of the predictor {x ∈ X : h(x) = v} selects a loss-minimizing hypothesis c_v ∈ C; the predictor wins if h competes with the adaptive adversary's loss. Despite the strength of the adversary, our main result demonstrates the feasibility of Swap Agnostic Learning for any convex loss (the winning condition is formalized in the sketch after this table). Somewhat surprisingly, the result follows by proving an equivalence between Swap Agnostic Learning and swap variants of the recent notions of Omniprediction [15] and Multicalibration [20]. Beyond this equivalence, we establish further connections to the literature on Outcome Indistinguishability [6, 14], revealing a unified notion of OI that captures all existing notions of omniprediction and multicalibration.
Researcher Affiliation | Collaboration | Parikshit Gopalan (Apple, Inc.); Michael P. Kim (UC Berkeley); Omer Reingold (Stanford University)
Pseudocode | Yes | Algorithm 1 (Swap Agnostic Learning via MCBoost) and Algorithm 2 (MCBoost); an illustrative sketch of the MCBoost loop appears after this table.
Open Source Code | No | The paper contains no explicit statement about releasing source code and provides no link to a code repository for the described methodology.
Open Datasets | No | The paper is theoretical and does not describe or rely on any dataset, so there is no dataset whose public availability could be assessed.
Dataset Splits | No | The paper is theoretical and does not describe training, validation, or test splits.
Hardware Specification | No | The paper is theoretical and does not mention any hardware used to run experiments.
Software Dependencies | No | The paper is theoretical and does not list software dependencies or versions needed to reproduce the work.
Experiment Setup | No | The paper is theoretical and does not describe an experimental setup (hyperparameters, training configuration, etc.).
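
To make the game described under Research Type concrete, the winning condition can be written as a single inequality. The formulation below is a sketch reconstructed from the abstract alone, stated for a loss ℓ and error parameter ε; the paper's own definition may differ in notation and in how the approximation error is charged.

```latex
% Sketch of the swap agnostic learning guarantee (reconstructed, not verbatim):
% the predictor h wins if its overall expected loss competes with an adversary
% that selects a separate loss-minimizing hypothesis c_v in C on each level set
% {x in X : h(x) = v}.
\mathop{\mathbb{E}}_{(x,y)}\bigl[\ell(h(x), y)\bigr]
  \;\le\;
\mathop{\mathbb{E}}_{v \sim h(x)}\Bigl[\,
  \min_{c_v \in \mathcal{C}}
  \mathop{\mathbb{E}}_{(x,y)}\bigl[\ell(c_v(x), y) \,\big|\, h(x) = v\bigr]
\Bigr] + \varepsilon
```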
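
The paper's Algorithms 1 and 2 are given only as pseudocode. As a rough illustration of what an MCBoost-style loop computes, here is a generic multicalibration boosting sketch in Python, following the standard template from the multicalibration literature rather than the paper's exact algorithm; all identifiers (mcboost, hypotheses, eta, alpha, n_buckets) are illustrative assumptions, not the authors' API.

```python
import numpy as np

def mcboost(X, y, hypotheses, eta=0.1, alpha=0.01, max_iters=1000, n_buckets=10):
    """Generic multicalibration boosting sketch (not the paper's exact Algorithm 2).

    X          : (n, d) feature matrix
    y          : (n,) binary labels in {0, 1}
    hypotheses : list of functions c(X) -> (n,) real values in [0, 1]
    Returns predictions p in [0, 1] with no detectable calibration violation
    against `hypotheses` on the level sets (buckets) of p.
    """
    n = len(y)
    p = np.full(n, y.mean())  # start from the base rate

    for _ in range(max_iters):
        updated = False
        # Discretize predictions into level sets, mirroring {x in X : h(x) = v}.
        buckets = np.minimum((p * n_buckets).astype(int), n_buckets - 1)
        for v in range(n_buckets):
            mask = buckets == v
            if not mask.any():
                continue
            residual = y[mask] - p[mask]
            for c in hypotheses:
                # Correlation of hypothesis c with the residual on this level set;
                # a large value witnesses a multicalibration violation.
                corr = np.mean(c(X[mask]) * residual)
                if abs(corr) > alpha:
                    # Boosting update: shift predictions toward the witness,
                    # then project back into [0, 1].
                    p[mask] = np.clip(p[mask] + eta * np.sign(corr) * c(X[mask]), 0.0, 1.0)
                    updated = True
        if not updated:
            break  # no bucket has a violation: approximately multicalibrated
    return p
```

A typical invocation would pass a small class of simple real-valued hypotheses, e.g. hypotheses = [lambda X, j=j: X[:, j] for j in range(X.shape[1])] with features scaled to [0, 1].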