Notice: The reproducibility variables underlying each score are classified using an automated LLM-based pipeline, validated against a manually labeled dataset. LLM-based classification introduces uncertainty and potential bias; scores should be interpreted as estimates. Full accuracy metrics and methodology are described in [1].
Explainable and Efficient Randomized Voting Rules
Authors: Soroush Ebadian, Aris Filos-Ratsikas, Mohamad Latifian, Nisarg Shah
NeurIPS 2023 | Venue PDF | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We study the efficiency gains which can be unlocked by using voting rules that add a simple randomization step to a deterministic rule, thereby retaining explainability. We focus on two such families of rules, randomized positional scoring rules and random committee member rules, and show, theoretically and empirically, that they indeed achieve explainability and efficiency simultaneously to some extent. |
| Researcher Affiliation | Academia | Soroush Ebadian University of Toronto EMAIL Aris Filos-Ratsikas University of Edinburgh EMAIL Mohamad Latifian University of Toronto EMAIL Nisarg Shah University of Toronto EMAIL |
| Pseudocode | No | The paper mentions "Algorithm 1 in the supplementary material", indicating that pseudocode is not present in the main body of the paper. |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link or an explicit statement of code release) for the source code of the methodology described. |
| Open Datasets | No | The paper states: "We generate preference profiles by sampling n rankings over m alternatives iid from the Mallows model [49]". This describes a method for synthetic data generation rather than providing access information for a fixed, publicly available dataset used in experiments. No specific link, DOI, repository, or citation to a public dataset with explicit access details is provided. |
| Dataset Splits | No | The paper describes generating instances for experiments (e.g., "For each combination of n = 100 agents, m ∈ {5, 10, . . . , 50} alternatives, and dispersion parameter φ ∈ {0, 0.1, . . . , 1}, we sample 150 instances"), but it does not specify explicit training, validation, or test dataset splits in the typical sense for a fixed dataset, as it uses synthetic data generation for evaluation. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used to run the experiments (e.g., GPU/CPU models, memory, or cloud resources). |
| Software Dependencies | No | The paper does not provide specific version numbers for any software components, libraries, or solvers used in the experiments. |
| Experiment Setup | Yes | We generate preference profiles by sampling n rankings over m alternatives iid from the Mallows model [49]... For each combination of n = 100 agents, m ∈ {5, 10, . . . , 50} alternatives, and dispersion parameter φ ∈ {0, 0.1, . . . , 1}, we sample 150 instances, and report averages along with the standard error. |
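The experiment setup above samples preference profiles from the Mallows model. As a minimal sketch of what such a generator might look like (not the authors' code, which is not released; the repeated insertion method and the identity reference ranking are standard choices assumed here):

```python
import random

def sample_mallows(m, phi, rng=random):
    """Sample one ranking of m alternatives from the Mallows model via the
    repeated insertion method, centered at the identity reference ranking.
    phi in [0, 1] is the dispersion parameter: phi = 0 reproduces the
    reference ranking exactly; phi = 1 gives a uniformly random ranking."""
    ranking = []
    for i in range(m):
        # Insert alternative i into one of the i+1 slots of the partial
        # ranking with P(position j) proportional to phi^(i - j).
        # (For phi = 0, Python's 0**0 == 1 selects j = i, i.e. the
        # reference order.)
        weights = [phi ** (i - j) for j in range(i + 1)]
        j = rng.choices(range(i + 1), weights=weights)[0]
        ranking.insert(j, i)
    return ranking

def sample_profile(n, m, phi, seed=None):
    """Sample a profile of n i.i.d. Mallows rankings over m alternatives."""
    rng = random.Random(seed)
    return [sample_mallows(m, phi, rng) for _ in range(n)]
```

A run matching the paper's grid would loop `sample_profile(100, m, phi)` over m ∈ {5, 10, . . . , 50} and φ ∈ {0, 0.1, . . . , 1}, drawing 150 instances per cell and averaging the resulting efficiency measures.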