Lie on the Fly: Iterative Voting Center with Manipulative Voters

Authors: Lihi Naamani-Dery, Svetlana Obraztsova, Zinovi Rabinovich, Meir Kalech

IJCAI 2015

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We provide practical algorithms for manipulation under the Borda voting rule and evaluate them using two voting centers: the Careful voting center, which tries to avoid manipulation, and the Naive voting center. We show that, in practice, manipulation happens in a low percentage of the settings and has a low impact on the final outcome. The Careful voting center reduces manipulation even further. Experiments were performed on the real-world Sushi dataset [Kamishima et al., 2005].
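For context on the Borda rule referenced above, the following is a minimal scoring sketch in Python (illustrative names; not the authors' code):

```python
# A minimal sketch of standard Borda scoring (illustrative only).
from collections import defaultdict

def borda_scores(profile):
    """profile: list of ballots, each a list of candidates ranked best to worst."""
    m = len(profile[0])                                # number of candidates
    scores = defaultdict(int)
    for ballot in profile:
        for position, candidate in enumerate(ballot):
            scores[candidate] += m - 1 - position      # top position earns m-1 points, last earns 0
    return dict(scores)

# Example with three voters over candidates a, b, c:
profile = [["a", "b", "c"], ["b", "a", "c"], ["a", "c", "b"]]
print(borda_scores(profile))                           # {'a': 5, 'b': 3, 'c': 1}
```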
Researcher Affiliation | Collaboration | Lihi Naamani-Dery (Ariel University, Ariel, Israel; lihid@ariel.ac.il); Svetlana Obraztsova (Tel Aviv University, Tel Aviv, Israel; svetlana.obraztsova@gmail.com); Zinovi Rabinovich (Mobileye Vision Technologies Ltd., Jerusalem, Israel; zr@zinovi.net); Meir Kalech (Ben-Gurion University, Beer Sheva, Israel; kalech@bgu.ac.il)
Pseudocode | Yes | Algorithm 1 (Voter Manipulation function: Case-A); Algorithm 2 (Voter Manipulation function: Case-B); Algorithm 3 (Voter Manipulation function: Case-C)
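The paper's Case-A/B/C functions are specific to its iterative voting-center protocol and are not reproduced here. The sketch below is only a generic, hypothetical illustration of the underlying idea: a single voter brute-forcing a Borda ballot whose outcome they strictly prefer to sincere voting.

```python
# A generic brute-force check for a beneficial Borda manipulation by a single voter.
# Illustration only; this is NOT the paper's Case-A/B/C algorithms, which operate
# inside an iterative voting center that queries voters incrementally.
from itertools import permutations

def borda_winner(profile):
    """Borda winner over a list of full rankings, with lexicographic tie-breaking."""
    m = len(profile[0])
    scores = {c: 0 for c in profile[0]}
    for ballot in profile:
        for pos, c in enumerate(ballot):
            scores[c] += m - 1 - pos
    return min(scores, key=lambda c: (-scores[c], c))

def find_manipulation(others, true_pref):
    """Return some ballot whose outcome the manipulator strictly prefers to voting sincerely, or None."""
    sincere_winner = borda_winner(others + [true_pref])
    for ballot in permutations(true_pref):             # exponential in the number of candidates
        w = borda_winner(others + [list(ballot)])
        if true_pref.index(w) < true_pref.index(sincere_winner):
            return list(ballot)
    return None
```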
Open Source Code | No | The paper notes that the 'ES algorithm found in [Naamani-Dery et al., 2015]' is publicly available, but this refers to prior work, not to the code for the methodology or experiments described in this paper.
Open Datasets | Yes | Experiments were performed on the real-world Sushi dataset [Kamishima et al., 2005].
Dataset Splits | No | The paper describes the number of candidates and voters and how the experiments were run (e.g., '20 sets of random profiles were evaluated. For each set of profiles, the experiment was conducted 40 times'), but it does not specify conventional training/validation/test splits for model training and evaluation.
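A minimal sketch of the repetition scheme quoted above (20 random profile sets, 40 runs each), assuming profiles are drawn by sampling voters' rankings and that the experiment itself is supplied as a callable; all names here are hypothetical:

```python
# Hypothetical harness matching the reported repetition scheme: 20 random profile sets,
# 40 runs per set. `experiment` is a placeholder callable provided by the user.
import random

def run_repetitions(all_rankings, n_voters, experiment,
                    n_profile_sets=20, runs_per_set=40, seed=0):
    rng = random.Random(seed)
    results = []
    for _ in range(n_profile_sets):
        profile = rng.sample(all_rankings, n_voters)   # draw one random profile of n_voters ballots
        for _ in range(runs_per_set):
            results.append(experiment(profile))        # e.g. one iterative-election simulation
    return results
```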
Hardware Specification | No | The paper does not mention any specific hardware (e.g., CPU or GPU models) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies.
Experiment Setup | Yes | The number of candidates was set to 10, the maximum in the Sushi dataset. The number of voters was varied over the range 10, 50, 100, 150, 200, 250. To conclude which algorithm performs best over multiple datasets, we followed a robust non-parametric procedure proposed by [García et al., 2010]. We first used the Friedman Aligned Ranks test to reject the null hypothesis that all heuristics perform the same, followed by the Bonferroni-Dunn test to determine whether one heuristic performs significantly better than the others.
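A hedged sketch of that two-step statistical procedure. SciPy ships the classic Friedman test rather than the Friedman Aligned Ranks variant of [García et al., 2010], so it is used here only as a stand-in; the Bonferroni-Dunn step compares average ranks against the best-ranked heuristic. Function and variable names are illustrative.

```python
# Two-step comparison sketch: omnibus Friedman test, then Bonferroni-Dunn post hoc
# against the best-ranked heuristic (the control).
import numpy as np
from scipy import stats

def compare_heuristics(results):
    """results: (n_datasets x k_heuristics) score matrix; smaller scores assumed better."""
    n, k = results.shape
    # Step 1: omnibus test of the null hypothesis that all heuristics perform the same.
    _, friedman_p = stats.friedmanchisquare(*results.T)
    # Step 2: Bonferroni-Dunn post hoc comparison against the best-ranked heuristic.
    avg_ranks = np.mean([stats.rankdata(row) for row in results], axis=0)
    control = int(np.argmin(avg_ranks))                      # heuristic with the best average rank
    se = np.sqrt(k * (k + 1) / (6.0 * n))                    # standard error of a rank difference
    z = (avg_ranks - avg_ranks[control]) / se
    dunn_p = np.minimum(1.0, 2 * stats.norm.sf(np.abs(z)) * (k - 1))  # Bonferroni over k-1 comparisons
    return {"friedman_p": friedman_p, "avg_ranks": avg_ranks,
            "control": control, "dunn_p": dunn_p}
```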