The Complexity of Learning Approval-Based Multiwinner Voting Rules
Authors: Ioannis Caragiannis, Karl Fehrs (pp. 4925–4932)
AAAI 2022 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Theoretical | We study the PAC learnability of multiwinner voting, focusing on the class of approval-based committee scoring (ABCS) rules. These are voting rules applied on profiles with approval ballots, where each voter approves some of the candidates. According to ABCS rules, each committee of k candidates collects from each voter a score that depends on the size of the voter's ballot and on the size of its intersection with the committee. Then, committees of maximum score are the winning ones. Our goal is to learn a target rule (i.e., to learn the corresponding scoring function) using information about the winning committees of a small number of sampled profiles. Despite the existence of exponentially many outcomes compared to single-winner elections, we show that the sample complexity is still low: a polynomial number of samples carries enough information for learning the target rule with high confidence and accuracy. Unfortunately, even simple tasks that need to be solved for learning from these samples are intractable. We prove that deciding whether there exists some ABCS rule that makes a given committee winning in a given profile is a computationally hard problem. Our results extend to the class of sequential Thiele rules, which have received attention due to their simplicity. (An illustrative sketch of ABCS scoring appears after this table.) |
| Researcher Affiliation | Academia | Ioannis Caragiannis, Karl Fehrs. Department of Computer Science, Aarhus University, Åbogade 34, 8200 Aarhus N, Denmark. {iannis, karl}@cs.au.dk |
| Pseudocode | No | The paper describes algorithms and theoretical concepts through mathematical notation and prose but does not include any explicitly labeled pseudocode blocks or algorithms. |
| Open Source Code | No | The paper does not contain any statement about releasing open-source code for the methodology described, nor does it provide a link to a code repository. |
| Open Datasets | No | This paper is theoretical and does not involve empirical experiments with datasets. Therefore, it does not mention or provide access to any training datasets. |
| Dataset Splits | No | This paper is theoretical and does not involve empirical experiments with datasets. Therefore, it does not mention or specify any training, validation, or test dataset splits. |
| Hardware Specification | No | The paper is theoretical and does not describe any experiments that would require specific hardware. Therefore, no hardware specifications are mentioned. |
| Software Dependencies | No | The paper is theoretical and does not describe any experimental implementation that would require specific software dependencies with version numbers. |
| Experiment Setup | No | The paper is theoretical and does not describe any empirical experiments or their setup. Therefore, it does not provide details on hyperparameters or system-level training settings. |
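
The "Research Type" row above quotes the paper's description of ABCS rules: every size-k committee collects from each voter a score f(|ballot|, |ballot ∩ committee|), and committees of maximum total score win. The sketch below is a minimal, hypothetical illustration of that definition, not code from the paper. The function name `abcs_winners`, the set-based ballot representation, and the Chamberlin–Courant-style scoring function in the usage example are assumptions made for illustration; the paper's hardness results suggest that such brute-force enumeration is only practical for toy instances.

```python
from itertools import combinations

def abcs_winners(profile, candidates, k, score):
    """Return all size-k committees with maximum total ABCS score.

    profile    -- list of approval ballots, each a set of candidates
    candidates -- iterable of all candidates
    k          -- committee size
    score      -- scoring function f(ballot_size, overlap) -> number
    """
    best, winners = None, []
    for committee in combinations(sorted(candidates), k):
        committee_set = set(committee)
        # Each voter contributes f(|ballot|, |ballot ∩ committee|).
        total = sum(score(len(ballot), len(ballot & committee_set))
                    for ballot in profile)
        if best is None or total > best:
            best, winners = total, [committee]
        elif total == best:
            winners.append(committee)
    return winners

# Usage example with a Chamberlin–Courant-style scoring function:
# a voter awards 1 point to a committee containing at least one
# approved candidate, regardless of ballot size.
profile = [{"a", "b"}, {"b", "c"}, {"c"}, {"a"}]
cc = lambda ballot_size, overlap: 1 if overlap >= 1 else 0
print(abcs_winners(profile, {"a", "b", "c", "d"}, 2, cc))
# -> [('a', 'c')], the only committee approved by all four voters
```

The learning problem studied in the paper goes in the opposite direction: given sampled profiles together with their winning committees, infer a scoring function `score` consistent with them, which the authors show is computationally hard even though polynomially many samples suffice information-theoretically.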