Measuring and Controlling Divisiveness in Rank Aggregation

Authors: Rachael Colley, Umberto Grandi, César Hidalgo, Mariana Macedo, Carlos Navarrete

IJCAI 2023

Reproducibility variables, results, and supporting LLM responses:
Research Type: Experimental. LLM response: "We give a theoretical and experimental analysis of our divisiveness measures by relating them to other notions and giving bounds on their limit cases (Section 3). Importantly, we show that our measures can distinguish between key profiles which other measures cannot. We then inspect two aspects of control: first, by studying the effect of removing pairwise comparisons from the agents' rankings (Section 4.2) and, second, by adding additional controlled agents (Section 4.2). All our code is available at https://github.com/CenterForCollectiveLearning/divisiveness-theoretical-IJCAI2023. We conducted experiments on synthetic preference profiles to test whether divisiveness with α = 0 correlates with the rank-variance defined in Section 2.3."
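The quoted passage only names the divisiveness measures; their formal definitions live in the paper's Section 3. As a rough, non-authoritative illustration, the Python sketch below implements one plausible pairwise-split reading: for an issue x and each other issue y, agents are split into those ranking x above y and those ranking y above x, x's normalized Borda score is computed within each group, and the mean absolute gap is returned. Treating the mean as the α = 0 case is our assumption, not the paper's exact definition.

```python
def borda_score(profile, x):
    """Average normalized Borda score of issue x over a list of rankings.

    Each ranking is a tuple of issues ordered best to worst; the result
    lies in [0, 1]. Returns None for an empty group of agents.
    """
    if not profile:
        return None
    m = len(profile[0])
    return sum((m - 1 - r.index(x)) / (m - 1) for r in profile) / len(profile)


def divisiveness(profile, x):
    """Pairwise-split divisiveness of issue x (our alpha = 0 reading).

    For every other issue y, agents are split by their x-vs-y comparison
    and x's Borda scores in the two groups are compared; the mean
    absolute gap over all y is returned.
    """
    issues = profile[0]
    gaps = []
    for y in issues:
        if y == x:
            continue
        x_over_y = [r for r in profile if r.index(x) < r.index(y)]
        y_over_x = [r for r in profile if r.index(x) > r.index(y)]
        s_above, s_below = borda_score(x_over_y, x), borda_score(y_over_x, x)
        if s_above is not None and s_below is not None:
            gaps.append(abs(s_above - s_below))
    return sum(gaps) / len(gaps) if gaps else 0.0
```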
Researcher Affiliation: Academia. LLM response: "¹IRIT, Université Toulouse Capitole, France; ²Center for Collective Learning, ANITI, TSE, IAST, IRIT, Université de Toulouse, France; ³Alliance Manchester Business School, University of Manchester, UK; ⁴Center for Collective Learning, CIAS, Corvinus University, Hungary"
Pseudocode: No. LLM response: "The paper describes algorithms like INJECT_s but does not present them in formal pseudocode blocks or clearly labeled algorithm boxes."
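Because no pseudocode is given, the following greedy stand-in is our own sketch of what an INJECT_s-style procedure could look like, reusing the divisiveness function above. The pairing strategy (one ranking with the target first, one with it last) and the stopping rule are assumptions, not the paper's algorithm.

```python
def inject_until_most_divisive(profile, target, max_new=1000):
    """Naive stand-in for an INJECT_s-style procedure (our sketch only).

    Repeatedly appends a pair of opposed rankings, one placing `target`
    first and one placing it last, which widens the pairwise-split gaps
    driving the divisiveness score. Returns how many agents were added
    before `target` became the most divisive issue, or None if the
    budget ran out.
    """
    issues = list(profile[0])
    others = [y for y in issues if y != target]
    top = tuple([target] + others)      # target ranked first
    bottom = tuple(others + [target])   # target ranked last
    extended = list(profile)
    for added in range(0, max_new, 2):
        scores = {x: divisiveness(extended, x) for x in issues}
        if max(scores, key=scores.get) == target:
            return added
        extended += [top, bottom]
    return None
```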
Open Source Code: Yes. LLM response: "All our code is available at https://github.com/CenterForCollectiveLearning/divisiveness-theoretical-IJCAI2023."
Open Datasets: Yes. LLM response: "Rankings were generated using the PrefLib library [Mattei and Walsh, 2013, 2017]. We tested 100 profiles of rankings generated via the impartial culture (IC) and the Urn model with a correlation of 10% and 50% (named UM10 and UM50, respectively)."
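The actual profiles come from PrefLib's generators; for a dependency-free picture of the two cultures, a sketch is below. The mapping from the quoted 10%/50% correlation c to the Pólya-urn replacement parameter, a = c · m! / (1 − c), is our assumed convention, not something the paper states.

```python
import math
import random


def impartial_culture(n, issues, rng=random):
    """n i.i.d. uniform-random rankings over `issues` (impartial culture)."""
    return [tuple(rng.sample(issues, len(issues))) for _ in range(n)]


def urn_profile(n, issues, correlation, rng=random):
    """Polya urn profile; correlation = 0 reduces to impartial culture."""
    m_fact = math.factorial(len(issues))
    a = correlation * m_fact / (1.0 - correlation)  # assumed mapping
    profile = []
    for i in range(n):
        # Draw a fresh uniform ranking with probability m!/(m! + i*a);
        # otherwise copy a uniformly chosen earlier ranking.
        if rng.random() < m_fact / (m_fact + i * a):
            profile.append(tuple(rng.sample(issues, len(issues))))
        else:
            profile.append(rng.choice(profile))
    return profile
```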
Dataset Splits: No. LLM response: "The paper describes generating and testing profiles but does not specify a validation set or explicit training/validation/test splits."
Hardware Specification: No. LLM response: "The paper does not specify any hardware details (e.g., CPU, GPU models, memory) used for running the experiments."
Software Dependencies: No. LLM response: "The paper mentions the PrefLib library but does not specify its version number or any other software dependencies with their versions."
Experiment Setup: Yes. LLM response: "We tested 100 profiles of rankings generated via the impartial culture (IC) and the Urn model with a correlation of 10% and 50% (named UM10 and UM50, respectively). Using the Urn model with 10% correlation as an example, we plot in Figure 1 the average Kendall's tau correlation. We tested 100 profiles for each of the three preference generation methods (IC, UM10, and UM50), varying the number of issues m ∈ [3, 18]. For each m ∈ [2, 11] and each of the three profile generation methods (IC, UM10, UM50), we considered 100 profiles to test how many new agents INJECT_s required to make the target issue the most divisive. Figure 3 focuses on IC profiles with 8 issues. It shows the divisiveness rankings of the 8 issues in the initial profile (at 0%) and their evolution as INJECT_Borda inserts additional rankings to make the least divisive issue the most divisive (the highlighted line at the 8th position). In particular, by adding around 35% new agents we can make the least divisive issue the most divisive via our simple algorithm."
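To make the described correlation check concrete, a sketch that combines the pieces above with SciPy's kendalltau follows; the rank-variance formula (per-issue variance of ranking positions) is our reading of the paper's Section 2.3, and the profile size of 500 agents is arbitrary.

```python
from statistics import pvariance

from scipy.stats import kendalltau


def rank_variance(profile, x):
    """Variance of issue x's position across agents (our reading of
    the rank-variance defined in the paper's Section 2.3)."""
    return pvariance([r.index(x) for r in profile])


# One synthetic trial; the experiment described above averages the
# resulting Kendall's tau over 100 profiles per culture and value of m.
issues = list(range(8))
profile = impartial_culture(500, issues)
div = [divisiveness(profile, x) for x in issues]
var = [rank_variance(profile, x) for x in issues]
tau, _ = kendalltau(div, var)
print(f"Kendall's tau between divisiveness and rank-variance: {tau:.3f}")
```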