Sample Complexity of Uniform Convergence for Multicalibration

Authors: Eliran Shabat, Lee Cohen, Yishay Mansour

NeurIPS 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our main results are sample bounds that guarantee uniform convergence for a given class of predictors. We first derive a sample complexity bound for a finite hypothesis class that is logarithmic in the size of the class. For an infinite hypothesis class, we then derive a bound that depends on the graph dimension of the class (an extension of the VC dimension to multiclass predictors). Finally, we derive a lower bound on the required sample size. (An illustrative finite-class bound of this flavor is sketched after the table.)
Researcher Affiliation | Collaboration | Eliran Shabat (Tel Aviv University, shabat.eliran@gmail.com); Lee Cohen (Tel Aviv University, leecohencs@gmail.com); Yishay Mansour (Tel Aviv University and Google Research, mansour.yishay@gmail.com)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide access to source code; it is a theoretical paper focused on sample complexity bounds.
Open Datasets | No | The paper is theoretical and reports no experiments on datasets, so public dataset availability does not apply.
Dataset Splits | No | The paper is theoretical and does not describe experimental setups or dataset splits.
Hardware Specification | No | The paper is theoretical and reports no experiments that would require specific hardware.
Software Dependencies | No | The paper is theoretical and describes no implementation that would require ancillary software or version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup or hyperparameters.
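
For context on the logarithmic dependence on |H| noted in the Research Type row: the classical finite-class uniform convergence argument combines Hoeffding's inequality with a union bound over the class. The sketch below is that generic textbook bound, not the paper's multicalibration-specific result (which further depends on the calibration parameters); here \mathrm{err}_D(h) denotes the true error of h and \widehat{\mathrm{err}}_S(h) its empirical error on an i.i.d. sample S of size m.

  \Pr\Bigl[\exists\, h \in H : \bigl|\widehat{\mathrm{err}}_S(h) - \mathrm{err}_D(h)\bigr| > \epsilon\Bigr] \le 2\,|H|\,e^{-2m\epsilon^2}

Setting the right-hand side to \delta and solving for m gives

  m \;\ge\; \frac{1}{2\epsilon^2}\,\ln\frac{2|H|}{\delta},

which guarantees, with probability at least 1 - \delta, that every h \in H has empirical error within \epsilon of its true error.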