Now We’re Talking: Better Deliberation Groups through Submodular Optimization
Authors: Jake Barrett, Kobi Gal, Paul Gölz, Rose M. Hong, Ariel D. Procaccia
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments with data from real citizens' assemblies demonstrate that our approach substantially outperforms the heuristic algorithm currently used by practitioners. |
| Researcher Affiliation | Academia | Jake Barrett (1), Kobi Gal (1, 2), Paul Gölz (3), Rose M. Hong (3), and Ariel D. Procaccia (3); (1) University of Edinburgh, (2) Ben-Gurion University of the Negev, (3) Harvard University |
| Pseudocode | Yes | Algorithm 1: SIMAPPROX — for t = 0, 1, ..., T−1 do: p ← ⌊(t/T)(1 + log₂ T)⌋; Z ← Z + { argmax_{S ∉ Z} f̂_{2^p}(Z + {S}) } |
| Open Source Code | Yes | Our implementation is open source at https://github.com/rosemhong/tables. |
| Open Datasets | No | The paper mentions using 'seven datasets, each based on data from a real citizens' assembly' and deriving some from the assembly-selection data used by Flanigan et al. (2021a), but it provides no direct public-access information (URL or DOI) for the specific datasets used in its experiments, nor are they well-known public datasets accessible without further information. |
| Dataset Splits | No | The paper does not provide specific dataset split information (exact percentages, sample counts, or citations to predefined splits) for training, validation, or testing. |
| Hardware Specification | Yes | To compute experiments in parallel, we run them on an AWS EC2 C5 instance with a 3.6 GHz processor, 16 threads, and 32 GB of RAM. |
| Software Dependencies | No | The paper states 'We have implemented all algorithms in this work in Python, using Gurobi as our ILP solver.' However, it does not specify version numbers for Python or Gurobi, which are required for full reproducibility. |
| Experiment Setup | Yes | To accommodate one outlier and to be safe, we set the optimization timeout to 120 seconds from here on, for the ILP calls both in the greedy algorithm and in SIMAPPROX. |
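The pseudocode row above excerpts the paper's Algorithm 1 (SIMAPPROX), a greedy loop that adds one element at a time while growing the sample budget for the objective estimate geometrically across iterations. Below is a minimal illustrative sketch of that loop structure. The function names `sim_approx` and `f_hat`, and the representation of candidates and the partial solution `Z`, are assumptions for illustration and are not taken from the paper's implementation:

```python
import math

def sim_approx(candidates, f_hat, T):
    """Greedy loop in the spirit of Algorithm 1 (SIMAPPROX).

    f_hat(num_samples, assignment) is an estimator of the objective:
    more samples -> a more accurate (and more expensive) estimate.
    This sketch assumes its signature; the paper's estimator differs.
    """
    Z = []                      # partial solution built greedily
    remaining = set(candidates)
    for t in range(T):
        # Sample budget 2^p grows geometrically over the T iterations,
        # mirroring p <- floor((t/T)(1 + log2 T)) from the pseudocode.
        p = math.floor((t / T) * (1 + math.log2(T)))
        samples = 2 ** p
        # Add the candidate whose inclusion maximizes the estimated value.
        best = max(remaining, key=lambda S: f_hat(samples, Z + [S]))
        Z.append(best)
        remaining.remove(best)
    return Z
```

The design point this illustrates is that early greedy iterations tolerate cruder estimates, so cheap low-sample evaluations suffice there, while later iterations, where marginal gains are smaller, receive larger sample budgets.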