Manipulation-Robust Selection of Citizens’ Assemblies

Authors: Bailey Flanigan, Jennifer Liang, Ariel D. Procaccia, Sven Wang

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "These theoretical results are confirmed via experiments in eight real-world datasets." and "Our empirical results closely track our theory, showing that Leximin and Nash Welfare suffer high manipulability even as n grows, while the manipulability of the ℓ2 and ℓ∞ norms declines quickly."
Researcher Affiliation | Academia | Bailey Flanigan (Carnegie Mellon University), Jennifer Liang (Harvard University), Ariel D. Procaccia (Harvard University), Sven Wang (Massachusetts Institute of Technology)
Pseudocode | No | The paper describes the steps of its algorithms in prose but does not provide any structured pseudocode or clearly labeled algorithm blocks.
Open Source Code | No | The paper does not provide a statement or link indicating that the source code for its methodology is publicly available.
Open Datasets | No | The datasets were shared privately with the authors, and no public release is stated: "The eight real-world datasets were shared with us by groups of citizens' assembly organizers. sf(a)-sf(e) were shared by the Sortition Foundation (UK); cca was shared by the Center for Blue Democracy (US); hd by Healthy Democracy (US); and newd by New Democracy (Australia). None of these datasets contain individually-identifying information."
Dataset Splits | No | The paper does not specify any training, validation, or test splits. It notes that the pool size was increased by copying the existing pool for experimental purposes, but it describes no data partitioning of the kind used to train or evaluate machine-learning models.
Hardware Specification | No | The paper does not provide any specific hardware details, such as GPU/CPU models, processor types, or memory, used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., library or solver names with version numbers) needed to replicate the experiment.
Experiment Setup | No | The paper describes the manipulation strategies tested (OPT-1, MU, HP) and how the pool size was varied ("copy the pool, leaving p and k fixed"). However, it does not give hyperparameter values, training configurations, or system-level settings; the paper evaluates selection algorithms rather than training machine-learning models with such parameters.
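The pool-scaling setup quoted above (copying the pool while the quota vector p and panel size k stay fixed) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the function name and the example pool members are hypothetical.

```python
# Hedged sketch of the pool-copying procedure described in the paper's
# experiments: the pool of volunteers is duplicated to grow n, while the
# feature quotas p and the panel size k are left unchanged.

def scale_pool(pool, factor):
    """Return a pool enlarged by copying it `factor` times (hypothetical helper)."""
    return pool * factor

# Illustrative pool members; real datasets carry demographic features only.
base_pool = [
    {"gender": "F", "age": "18-30"},
    {"gender": "M", "age": "31-50"},
]

k = 1          # panel size: fixed across scalings
for factor in (1, 2, 4):
    pool = scale_pool(base_pool, factor)
    # n = len(pool) grows with the copy factor; quotas p and k do not change.
    print(len(pool), k)
```

Because every copy preserves each member's feature vector, the feasible quota constraints are identical at every scale, isolating the effect of pool size n on manipulability.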