Proportional Representation in Metric Spaces and Low-Distortion Committee Selection

Authors: Yusuf Kalayci, David Kempe, Vikram Kher

AAAI 2024

Reproducibility Variable Result LLM Response
Research Type Theoretical Our main result is that the EXPANDING APPROVALS RULE of Aziz and Lee is (α, γ)-representative with γ ≤ 1 + 6.71α for all α ≥ 1. Our results lead to three notable byproducts. First, we show that the EXPANDING APPROVALS RULE achieves constant proportional fairness in the ordinal model, giving the first positive result on metric proportional fairness with ordinal information. Second, we show that for the core fairness objective, the EXPANDING APPROVALS RULE achieves the same asymptotic tradeoff between resource augmentation and approximation as the recent results of Li et al., which used full knowledge of the metric. Finally, our results imply a very simple single-winner voting rule with metric distortion at most 44.
Researcher Affiliation Academia Yusuf Kalayci (University of Southern California), David Kempe (University of Southern California), Vikram Kher (Yale University). Emails: kalayci@usc.edu, David.M.Kempe@Gmail.com, vikram.kher@yale.edu
Pseudocode Yes Algorithm 1: EXPANDING APPROVALS RULE
Input: Election (V, C, π), committee size k
Output: Committee R
  Let U ← V be the set of uncovered voters.
  Let R ← ∅ be the selected committee.
  Let N_c ← ∅ for all c ∈ C.
  for τ = 1, ..., m do
    for v ∈ V in arbitrary order do
      if v ∈ U then
        Let c = π_v^{-1}(τ).
        if c ∉ R then
          Let N_c ← N_c ∪ {v}.
          if |N_c| = n/k then
            R ← R ∪ {c}.
            N_{c'} ← N_{c'} \ N_c for all c' ∈ C \ R.
            U ← U \ N_c. We say that N_c has been covered by c.
  if |R| < k then add k − |R| arbitrary candidates to R.
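The pseudocode above can be rendered as a short Python sketch. This is an illustrative reconstruction, not the authors' implementation; the argument `rankings[v]` (a list giving voter v's preference order over candidates, most-preferred first, standing in for π_v^{-1}) is a naming assumption, and the quota check uses ≥ n/k rather than strict equality for robustness:

```python
def expanding_approvals_rule(voters, candidates, rankings, k):
    """Sketch of the Expanding Approvals Rule of Aziz and Lee.

    voters: list of voter ids; candidates: list of candidate ids;
    rankings[v]: voter v's preference order, most-preferred first.
    Returns a committee of size k.
    """
    n = len(voters)
    quota = n / k                    # n/k uncovered supporters cover a candidate
    uncovered = set(voters)          # U: voters not yet covered
    committee = []                   # R: selected committee
    supporters = {c: set() for c in candidates}  # N_c: uncovered approvers of c

    for tau in range(len(candidates)):   # expand the approval threshold tau
        for v in voters:
            if v not in uncovered:
                continue
            c = rankings[v][tau]         # v's rank-tau candidate
            if c in committee:
                continue
            supporters[c].add(v)
            if len(supporters[c]) >= quota:
                committee.append(c)
                covered = supporters[c]
                for c2 in candidates:    # N_{c'} <- N_{c'} \ N_c for c' not in R
                    if c2 not in committee:
                        supporters[c2] -= covered
                uncovered -= covered     # U <- U \ N_c
                supporters[c] = set()

    # If fewer than k candidates were covered, pad with arbitrary candidates.
    for c in candidates:
        if len(committee) >= k:
            break
        if c not in committee:
            committee.append(c)
    return committee
```

For example, with four voters split evenly between first choices 'a' and 'b' and k = 2, both 'a' and 'b' reach the quota of 2 at τ = 1 and the rule returns the committee {'a', 'b'}.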
Open Source Code No The paper does not contain any explicit statements or links indicating that the source code for the described methodology or algorithms is publicly available, and it does not provide access to its own code.
Open Datasets No The paper is theoretical and presents mathematical concepts and algorithms; it does not involve empirical training on specific datasets. Thus, there is no mention of publicly available datasets for training purposes.
Dataset Splits No The paper is theoretical and does not involve empirical evaluation or dataset splitting for training, validation, or testing. Therefore, no specific dataset split information for validation is provided.
Hardware Specification No The paper is purely theoretical, focusing on mathematical definitions, algorithms, and proofs. It does not describe any computational experiments that would require specific hardware, thus no hardware specifications are provided.
Software Dependencies No The paper is theoretical and outlines algorithms and proofs, not a specific software implementation. Consequently, it does not list any software dependencies with version numbers.
Experiment Setup No The paper is theoretical and does not involve empirical experiments with specific setups, hyperparameters, or training configurations. Therefore, no such details are provided.