Preference Elicitation For Participatory Budgeting

Authors: Gerdus Benade, Swaprava Nath, Ariel Procaccia, Nisarg Shah

AAAI 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We analytically compare four preference elicitation methods (knapsack votes, rankings by value or value for money, and threshold approval votes) through the lens of implicit utilitarian voting, and find that threshold approval votes are qualitatively superior. This conclusion is supported by experiments using data from real participatory budgeting elections.
Researcher Affiliation | Academia | Gerdus Benade (Carnegie Mellon University, jbenade@andrew.cmu.edu); Swaprava Nath (Carnegie Mellon University, swapravn@cs.cmu.edu); Ariel D. Procaccia (Carnegie Mellon University, arielpro@cs.cmu.edu); Nisarg Shah (Harvard University, nisarg@g.harvard.edu)
Pseudocode | No | The paper describes its algorithms (Mechanisms A, B, and C) in prose but does not include structured pseudocode or an algorithm block.
Open Source Code | No | The paper provides no concrete statement or link indicating open-source code for the described methodology; the only related statement is an acknowledgment for "generously sharing their data".
Open Datasets | No | The paper states: "We use data from participatory budgeting elections held in 2015 and 2016 in Boston, Massachusetts." It names the dataset source but provides no concrete access information (link, DOI, repository, or formal dataset citation), so the dataset does not qualify as publicly available under the schema definition.
Dataset Splits | No | The paper describes its experimental setup, including drawing sub-profiles and random utility profiles, but does not specify standard training/validation/test splits with percentages or counts. It mentions a "holdout set" used to learn the optimal threshold value, but this is not a general validation split of the main dataset.
Hardware Specification | Yes | The experiments on the Boston 2016 dataset (10 alternatives) were run on an 8-core Intel(R) Xeon(R) CPU at 2.27 GHz with 50 GB of main memory.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers.
Experiment Setup | Yes | "For each dataset, we conduct 3 independent trials. In each trial, we create r sub-profiles, each consisting of n voters drawn at random from the population. For each sub-profile, we draw k random utility profiles v consistent with the sub-profile... The choices of parameters (r, n, k) for the three trials are (5, 10, 10), (8, 7, 10), and (10, 5, 10)... we learn the optimal threshold value based on a holdout set that is not subsequently used." A minimal code sketch of this loop appears below the table.
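
To make the quoted setup concrete, here is a minimal Python sketch of the trial loop. Only the trial structure, the (r, n, k) values (5, 10, 10), (8, 7, 10), (10, 5, 10), the 10-alternative Boston 2016 instance, and the use of a holdout set for threshold learning come from the paper text quoted above. Everything else is an assumption: the helpers `draw_subprofile`, `sample_consistent_utilities`, and `learn_threshold` are hypothetical stand-ins, as are the unit-sum utility model, the welfare objective, and the single-winner simplification (the paper actually selects budget-feasible sets of projects and defines vote consistency per elicitation format).

```python
import random

# (r, n, k) for the three trials, as quoted in the Experiment Setup row.
TRIAL_PARAMS = [(5, 10, 10), (8, 7, 10), (10, 5, 10)]
NUM_ALTERNATIVES = 10  # Boston 2016 dataset, per the Hardware Specification row


def draw_subprofile(population, n, rng):
    """n voters drawn at random from the population (sampling scheme assumed)."""
    return rng.sample(population, n)


def sample_consistent_utilities(subprofile, rng):
    """Hypothetical stand-in for drawing a utility profile consistent with the
    sub-profile's reported votes. The real consistency constraints depend on
    the vote format (knapsack, rankings, threshold approvals); here we just
    draw unit-sum random utilities for illustration."""
    profile = []
    for _ in subprofile:
        u = [rng.random() for _ in range(NUM_ALTERNATIVES)]
        total = sum(u)
        profile.append([x / total for x in u])
    return profile


def threshold_approvals(utilities, t):
    """Threshold approval votes: each voter approves every alternative whose
    utility meets the threshold t."""
    return [{a for a, u in enumerate(voter) if u >= t} for voter in utilities]


def approval_winner(approvals):
    """Pick the most-approved alternative (a single-winner simplification; the
    paper selects a budget-feasible set of projects)."""
    counts = [0] * NUM_ALTERNATIVES
    for ballot in approvals:
        for a in ballot:
            counts[a] += 1
    return max(range(NUM_ALTERNATIVES), key=counts.__getitem__)


def social_welfare(utilities, winner):
    """Utilitarian social welfare of the chosen alternative (assumed objective)."""
    return sum(voter[winner] for voter in utilities)


def learn_threshold(holdout_profiles, grid):
    """Assumed reading of 'learn the optimal threshold value based on a holdout
    set': pick the grid value with the best average welfare on held-out
    utility profiles."""
    def avg_welfare(t):
        return sum(
            social_welfare(v, approval_winner(threshold_approvals(v, t)))
            for v in holdout_profiles
        ) / len(holdout_profiles)
    return max(grid, key=avg_welfare)


def run_experiments(population, holdout_profiles, seed=0):
    rng = random.Random(seed)
    # Learn t on the holdout set, which is not reused below.
    t = learn_threshold(holdout_profiles, grid=[i / 20 for i in range(1, 11)])
    welfare = []
    for r, n, k in TRIAL_PARAMS:      # three independent trials
        for _ in range(r):            # r sub-profiles of n voters each
            sub = draw_subprofile(population, n, rng)
            for _ in range(k):        # k utility profiles per sub-profile
                v = sample_consistent_utilities(sub, rng)
                winner = approval_winner(threshold_approvals(v, t))
                welfare.append(social_welfare(v, winner))
    return welfare
```

Usage, on purely synthetic stand-in data: `run_experiments(list(range(100)), [sample_consistent_utilities(range(7), random.Random(1))])`. In the actual experiments, the sub-profiles and the consistency constraints come from the real Boston ballots rather than the random draws used here.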