Truthful and Near-Optimal Mechanisms for Welfare Maximization in Multi-Winner Elections

Authors: Umang Bhaskar, Varsha Dani, Abheek Ghosh

AAAI 2018

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | We show that for a particular input format called threshold approval voting, where each agent is presented with an independently chosen threshold, there is a mechanism with nearly optimal distortion when the number of voters is large. Threshold mechanisms are potentially manipulable, but place a low informational burden on voters. We then consider truthful mechanisms. For the widely-studied class of ordinal mechanisms which elicit the rankings of candidates from each agent, we show that truthfulness essentially imposes no additional loss of welfare. We give truthful mechanisms with distortion O(√(m log m)) for k-winner elections, and distortion O(√(m log m)) when candidates have arbitrary costs, in elections with m candidates. These nearly match known lower bounds for ordinal mechanisms that ignore the strategic behavior. We further tighten these lower bounds and show that for truthful mechanisms our first upper bound is tight. Lastly, when agents decide between two candidates, we give tight bounds on the distortion for truthful mechanisms.
Researcher Affiliation | Academia | Umang Bhaskar, Tata Institute of Fundamental Research, Mumbai, India 400 005; Varsha Dani, University of New Mexico, Albuquerque, NM 87131, USA; Abheek Ghosh, Indian Institute of Technology Guwahati, India 781039
Pseudocode | Yes | Mechanism 1: INDEPENDENT-THRESHOLDS; Mechanism 2: I.I.D.-THRESHOLDS-AND-CANDIDATES; Mechanism 3: k-WINNER-SELECTION; Mechanism 4: TRUTHFUL-RANKING-BY-VALUE
Open Source Code | No | The paper does not provide any statement about open-sourcing code or include links to code repositories.
Open Datasets | No | The paper is theoretical and does not use or refer to any publicly available or open datasets.
Dataset Splits | No | The paper is theoretical and does not describe any training, validation, or test dataset splits.
Hardware Specification | No | The paper is theoretical and does not mention any hardware specifications used for experiments.
Software Dependencies | No | The paper is theoretical and does not mention any specific software dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not describe any experimental setup details such as hyperparameters or training settings.
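
The abstract quoted above centers on two notions: threshold approval voting, where each voter approves the candidates whose value exceeds an independently drawn threshold, and distortion, the worst-case ratio between the optimal social welfare and the (expected) welfare achieved by the mechanism. The sketch below is a rough illustration only, not the paper's Mechanism 1: it assumes unit-sum voter utilities, thresholds drawn i.i.d. uniformly from (0, 1), and lowest-index tie-breaking, elects the most-approved candidate, and estimates the distortion of that toy rule empirically on a single hypothetical profile.

```python
# Toy illustration (not the paper's mechanisms): threshold approval voting
# and an empirical distortion estimate for a single-winner election.
# Assumptions (ours, not the paper's): nonnegative utilities normalized to
# sum to 1 per voter, thresholds drawn i.i.d. uniformly from (0, 1), and
# ties broken in favor of the lowest candidate index.

import random


def threshold_approval_winner(utilities, rng):
    """Each voter approves every candidate whose utility meets an
    independently drawn threshold; the most-approved candidate wins."""
    m = len(utilities[0])
    approvals = [0] * m
    for voter_utils in utilities:
        threshold = rng.random()  # independently chosen threshold per voter
        for c, u in enumerate(voter_utils):
            if u >= threshold:
                approvals[c] += 1
    return max(range(m), key=lambda c: approvals[c])


def social_welfare(utilities, candidate):
    """Utilitarian social welfare: total utility of the chosen candidate."""
    return sum(voter_utils[candidate] for voter_utils in utilities)


def empirical_distortion(utilities, trials=1000, seed=0):
    """Ratio of optimal welfare to the rule's average achieved welfare,
    estimated by repeatedly re-drawing the random thresholds."""
    rng = random.Random(seed)
    m = len(utilities[0])
    opt = max(social_welfare(utilities, c) for c in range(m))
    achieved = sum(
        social_welfare(utilities, threshold_approval_winner(utilities, rng))
        for _ in range(trials)
    ) / trials
    return opt / achieved


if __name__ == "__main__":
    # Three voters, three candidates, unit-sum utilities (hypothetical data).
    profile = [
        [0.6, 0.3, 0.1],
        [0.5, 0.4, 0.1],
        [0.1, 0.2, 0.7],
    ]
    print(f"empirical distortion on this profile ~ {empirical_distortion(profile):.3f}")
```

Note that this estimate reflects only one toy rule on one profile; the distortion bounds stated in the abstract are worst-case guarantees over all utility profiles, and the paper's truthful and threshold mechanisms differ in how thresholds and winners are chosen.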