Explainable and Efficient Randomized Voting Rules
Authors: Soroush Ebadian, Aris Filos-Ratsikas, Mohamad Latifian, Nisarg Shah
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study the efficiency gains which can be unlocked by using voting rules that add a simple randomization step to a deterministic rule, thereby retaining explainability. We focus on two such families of rules, randomized positional scoring rules and random committee member rules, and show, theoretically and empirically, that they indeed achieve explainability and efficiency simultaneously to some extent. |
| Researcher Affiliation | Academia | Soroush Ebadian University of Toronto soroush@cs.toronto.edu Aris Filos-Ratsikas University of Edinburgh Aris.Filos-Ratsikas@ed.ac.uk Mohamad Latifian University of Toronto latifian@cs.toronto.edu Nisarg Shah University of Toronto nisarg@cs.toronto.edu |
| Pseudocode | No | The paper mentions "Algorithm 1 in the supplementary material", indicating that pseudocode is not present in the main body of the paper. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link or an explicit statement of code release) for the source code of the methodology described. |
| Open Datasets | No | The paper states: "We generate preference profiles by sampling n rankings over m alternatives iid from the Mallows model [49]". This describes a method for synthetic data generation rather than providing access information for a fixed, publicly available dataset used in experiments. No specific link, DOI, repository, or citation to a public dataset with explicit access details is provided. |
| Dataset Splits | No | The paper describes generating instances for experiments (e.g., "For each combination of n = 100 agents, m ∈ {5, 10, . . . , 50} alternatives, and dispersion parameter φ ∈ {0, 0.1, . . . , 1}, we sample 150 instances"), but it does not specify explicit training, validation, or test dataset splits in the typical sense for a fixed dataset, as it uses synthetic data generation for evaluation. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory, or cloud resources). |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or solvers used in the experiments. |
| Experiment Setup | Yes | We generate preference profiles by sampling n rankings over m alternatives iid from the Mallows model [49]... For each combination of n = 100 agents, m ∈ {5, 10, . . . , 50} alternatives, and dispersion parameter φ ∈ {0, 0.1, . . . , 1}, we sample 150 instances, and report averages along with the standard error. |
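The experiment-setup row describes sampling preference profiles i.i.d. from the Mallows model. A minimal sketch of such a generator, using the standard repeated insertion method (the paper does not publish its code, so the function names and structure below are illustrative assumptions, not the authors' implementation):

```python
import random

def sample_mallows_ranking(m, phi, rng=None):
    """Sample one ranking over alternatives 0..m-1 from the Mallows model
    with the identity reference ranking and dispersion phi in [0, 1],
    via the repeated insertion method.
    phi = 0 reproduces the reference ranking; phi = 1 is uniform."""
    rng = rng or random.Random()
    ranking = []
    for i in range(m):
        # Insert alternative i at position j (0 = front) with
        # probability proportional to phi^(i - j).
        weights = [phi ** (i - j) for j in range(i + 1)]
        j = rng.choices(range(i + 1), weights=weights)[0]
        ranking.insert(j, i)
    return ranking

def sample_profile(n, m, phi, seed=0):
    """Sample a profile of n i.i.d. Mallows rankings over m alternatives."""
    rng = random.Random(seed)
    return [sample_mallows_ranking(m, phi, rng) for _ in range(n)]
```

Under this sketch, one configuration from the reported grid (n = 100 agents, a given m and φ) would be generated as `sample_profile(100, m, phi)`, repeated 150 times with different seeds to match the 150 instances per combination.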