Plurality Veto: A Simple Voting Rule Achieving Optimal Metric Distortion

Authors: Fatih Erdem Kizilkaya, David Kempe

IJCAI 2022

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our main result is an extremely simple voting rule, called PLURALITYVETO, which achieves the same optimal distortion of 3. Each candidate starts with a score equal to his number of first-place votes. These scores are then gradually decreased via an n-round veto process in which a candidate drops out when his score reaches zero. One after the other, voters decrement the score of their bottom choice among the standing candidates, and the last standing candidate wins. We give a one-paragraph proof that this voting rule achieves distortion 3. This rule is also immensely practical, and it only makes two queries to each voter, so it has low communication overhead. We also show that a straightforward extension can be used to give a constructive proof of the more general Ranking-Matching Lemma of Gkatzelis et al. We also generalize PLURALITYVETO into a class of randomized voting rules in the following way: PLURALITYVETO is run only for k < n rounds; then, a candidate is chosen with probability proportional to his residual score. This general rule interpolates between RANDOMDICTATORSHIP (for k = 0) and PLURALITYVETO (for k = n − 1), and k controls the variance of the output. We show that for all k, this rule has expected distortion at most 3. (Python sketches of both rules appear after the table.)
Researcher Affiliation | Academia | Fatih Erdem Kizilkaya and David Kempe, University of Southern California
Pseudocode | Yes | Algorithm 1 PLURALITYVETO (a runnable Python sketch follows the table):
    Input: An election E = (V, C, σ)
    Output: A winning candidate c ∈ C
    1: initialize score(c) = plu(c) for each c ∈ C
    2: let (v_1, . . . , v_n) be an arbitrary ordering of V
    3: for i = 1, 2, . . . , n do
    4:     A_i = {c ∈ C : score(c) > 0}
    5:     c_i = bottom_{A_i}(v_i)
    6:     decrement score(c_i) by 1
    7: return c_n
Open Source Code | No | The paper does not provide any links to or statements about the availability of open-source code for the described methodology.
Open Datasets | No | The paper focuses on theoretical contributions and does not conduct empirical studies using datasets; therefore, there is no mention of training data.
Dataset Splits | No | The paper is theoretical and does not involve empirical validation on datasets; thus, no validation splits are mentioned.
Hardware Specification | No | The paper is theoretical and does not describe any computational experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe any computational experiments; thus, no specific software dependencies with version numbers are listed.
Experiment Setup | No | The paper is theoretical and focuses on algorithm design and proofs; therefore, it does not include details on experimental setup such as hyperparameters or training configurations.
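
For concreteness, here is a minimal, self-contained Python sketch of Algorithm 1 as quoted in the Pseudocode row. The function name plurality_veto and the encoding of a profile as complete per-voter preference lists are our own illustrative assumptions; the paper ships no code (see the Open Source Code row), so this is a sketch of the rule, not the authors' implementation.

def plurality_veto(rankings):
    """Sketch of Algorithm 1 (PLURALITYVETO).

    `rankings` maps each voter to a complete list of candidates ordered
    from most to least preferred.
    """
    voters = list(rankings)  # line 2: any fixed ordering v_1, ..., v_n works
    candidates = {c for ranking in rankings.values() for c in ranking}
    # Line 1: score(c) = plu(c), the number of first-place votes for c.
    score = {c: 0 for c in candidates}
    for ranking in rankings.values():
        score[ranking[0]] += 1
    # Lines 3-6: one after the other, voters decrement the score of their
    # bottom choice among the standing candidates (those with positive score).
    last_vetoed = None
    for v in voters:
        standing = [c for c in candidates if score[c] > 0]
        last_vetoed = max(standing, key=lambda c: rankings[v].index(c))
        score[last_vetoed] -= 1
    # Line 7: after n decrements the total score is 0, and the candidate
    # vetoed in the final round is the last one standing, i.e. the winner.
    return last_vetoed

# Toy check (ours, not from the paper): plu = {a: 2, b: 1, c: 0}; b is
# vetoed first, the remaining vetoes fall on a, and a wins.
profile = {
    "v1": ["a", "b", "c"],
    "v2": ["a", "c", "b"],
    "v3": ["b", "c", "a"],
}
assert plurality_veto(profile) == "a"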
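
The randomized class described in the abstract admits an equally short sketch under the same assumptions (the name randomized_plurality_veto and the rng parameter are hypothetical). Running k < n veto rounds and then sampling proportionally to residual score gives RANDOMDICTATORSHIP at k = 0 (sampling proportional to plurality scores) and the deterministic PLURALITYVETO winner at k = n − 1 (the one remaining unit of score sits on a single candidate).

import random

def randomized_plurality_veto(rankings, k, rng=None):
    """Sketch of the randomized class: k < n veto rounds, then a winner is
    drawn with probability proportional to residual score."""
    rng = rng or random.Random()
    voters = list(rankings)
    n = len(voters)
    assert 0 <= k < n
    candidates = {c for ranking in rankings.values() for c in ranking}
    score = {c: 0 for c in candidates}
    for ranking in rankings.values():
        score[ranking[0]] += 1          # plurality scores
    for v in voters[:k]:                # k rounds of the veto process
        standing = [c for c in candidates if score[c] > 0]
        veto = max(standing, key=lambda c: rankings[v].index(c))
        score[veto] -= 1
    # n - k units of score remain; sample a survivor proportionally to them.
    survivors = [c for c in candidates if score[c] > 0]
    return rng.choices(survivors, weights=[score[c] for c in survivors])[0]

Per the abstract, the paper shows this rule has expected distortion at most 3 for every k.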