Consistency of weighted majority votes

Authors: Daniel Berend, Aryeh Kontorovich

NeurIPS 2014

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Theoretical | We revisit from a statistical learning perspective the classical decision-theoretic problem of weighted expert voting. In particular, we examine the consistency (both asymptotic and finitary) of the optimal Nitzan-Paroush weighted majority and related rules. In the case of known expert competence levels, we give sharp error estimates for the optimal rule. When the competence levels are unknown, they must be empirically estimated. We provide frequentist and Bayesian analyses for this situation. Some of our proof techniques are non-standard and may be of independent interest. The bounds we derive are nearly optimal, and several challenging open problems are posed. |
| Researcher Affiliation | Academia | Daniel Berend, Computer Science Department and Mathematics Department, Ben Gurion University, Beer Sheva, Israel, berend@cs.bgu.ac.il; Aryeh Kontorovich, Computer Science Department, Ben Gurion University, Beer Sheva, Israel, karyeh@cs.bgu.ac.il |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide access to source code for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not describe experiments on any dataset, nor does it mention dataset availability or citations. |
| Dataset Splits | No | The paper is theoretical and does not mention training, validation, or test splits. |
| Hardware Specification | No | The paper is theoretical, reports no empirical experiments, and does not specify any hardware. |
| Software Dependencies | No | The paper does not list ancillary software dependencies or version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe an experimental setup, hyperparameters, or training configuration. |
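
Since the paper itself contains no pseudocode or code, the following is a minimal NumPy sketch of the Nitzan-Paroush weighted majority rule the abstract refers to: with independent experts of competences p_i in (1/2, 1) casting ±1 votes, the optimal rule weights each vote by the log-odds w_i = log(p_i / (1 − p_i)). The function names, example competences, simulated voting history, and clipping constant below are illustrative assumptions, not material from the paper.

```python
import numpy as np

def nitzan_paroush_weights(p):
    """Log-odds weights w_i = log(p_i / (1 - p_i)) for competences p_i in (1/2, 1)."""
    p = np.asarray(p, dtype=float)
    return np.log(p / (1.0 - p))

def weighted_majority_vote(votes, weights):
    """Sign of the weighted vote sum; each vote is +1 or -1."""
    return 1 if np.dot(weights, votes) >= 0 else -1

# Known competences: the optimal (Nitzan-Paroush) rule uses the true p_i.
p = np.array([0.9, 0.6, 0.6, 0.55, 0.8])  # hypothetical competences
w = nitzan_paroush_weights(p)

# Unknown competences: plug in empirical estimates from a labeled history.
# correct[i, t] == 1 iff expert i was right on past example t (simulated here).
rng = np.random.default_rng(0)
correct = (rng.random((5, 100)) < p[:, None]).astype(int)
p_hat = correct.mean(axis=1).clip(1e-3, 1 - 1e-3)  # keep log-odds finite
w_hat = nitzan_paroush_weights(p_hat)

# One round of voting on a ground truth of +1.
votes = np.where(rng.random(5) < p, +1, -1)
print(weighted_majority_vote(votes, w), weighted_majority_vote(votes, w_hat))
```

The plug-in weights w_hat illustrate the unknown-competence setting the paper analyzes from frequentist and Bayesian viewpoints: competences are estimated empirically and substituted into the same log-odds rule. The clipping of p_hat is an arbitrary choice for this sketch, there only to keep the estimated log-odds finite when a sampled expert happens to be always right or always wrong.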