Differentially Private Condorcet Voting
Authors: Zhechen Li, Ao Liu, Lirong Xia, Yongzhi Cao, Hanpin Wang
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We prove that all of our rules satisfy absolute monotonicity, lexi-participation, probabilistic Pareto efficiency, approximate probabilistic Condorcet criterion, and approximate SD-strategyproofness. In addition, CMRR_λ satisfies the (non-approximate) probabilistic Condorcet criterion, while CMLAP_λ and CMEXP_λ satisfy strong lexi-participation. Finally, we regard differential privacy as a voting axiom, and discuss its relations to other axioms. |
| Researcher Affiliation | Academia | 1Key Laboratory of High Confidence Software Technologies (MOE), School of Computer Science, Peking University, China 2Department of Computer Science, Rensselaer Polytechnic Institute 3School of Computer Science and Cyber Engineering, Guangzhou University, China |
| Pseudocode | Yes | Mechanism 1: Randomized Condorcet Method |
| Open Source Code | No | The paper does not provide any concrete access information (e.g., a specific repository link, an explicit code release statement, or code in supplementary materials) for the methodology described. |
| Open Datasets | No | The paper is theoretical and does not use any datasets. |
| Dataset Splits | No | The paper is theoretical and does not mention any training/test/validation dataset splits. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, processor types, memory amounts, or detailed computer specifications) used for running experiments, as it is a theoretical paper. |
| Software Dependencies | No | The paper does not provide specific ancillary software details (e.g., library or solver names with version numbers) needed to replicate the experiment. |
| Experiment Setup | No | The paper does not contain specific experimental setup details (concrete hyperparameter values, training configurations, or system-level settings) as it is a theoretical paper. |
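Since the paper provides only pseudocode (Mechanism 1) and no released code, the following is a minimal illustrative sketch of a differentially private Condorcet-style voting rule. It is *not* the paper's CMRR_λ/CMLAP_λ/CMEXP_λ mechanisms: it applies the standard exponential mechanism to Copeland scores, with a conservative sensitivity bound (one voter can flip at most m−1 pairwise contests, changing a candidate's score by at most m−1). All function names are our own.

```python
import math
import random


def copeland_scores(profile, candidates):
    """Copeland score: number of pairwise majority victories per candidate.

    profile: list of rankings, each a tuple of candidates, best first.
    """
    score = {c: 0 for c in candidates}
    n = len(profile)
    for a in candidates:
        for b in candidates:
            if a == b:
                continue
            wins = sum(1 for r in profile if r.index(a) < r.index(b))
            if wins > n - wins:  # strict pairwise majority for a over b
                score[a] += 1
    return score


def dp_condorcet_sketch(profile, candidates, epsilon, rng=random):
    """Pick a winner via the exponential mechanism over Copeland scores.

    Sensitivity: changing one voter's ranking can flip up to m-1 pairwise
    contests, so each candidate's Copeland score changes by at most m-1.
    With that bound, sampling proportionally to exp(eps * score / (2*delta))
    gives eps-differential privacy.
    """
    delta = len(candidates) - 1
    scores = copeland_scores(profile, candidates)
    weights = {c: math.exp(epsilon * scores[c] / (2 * delta)) for c in candidates}
    total = sum(weights.values())
    x = rng.random() * total
    for c in candidates:
        x -= weights[c]
        if x <= 0:
            return c
    return candidates[-1]  # guard against floating-point drift
```

For small epsilon the winner distribution is close to uniform (strong privacy); for large epsilon the mechanism concentrates on the Condorcet winner when one exists, which is the intuition behind the paper's approximate probabilistic Condorcet criterion.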