Combining Fairness and Optimality when Selecting and Allocating Projects
Authors: Khaled Belahcène, Vincent Mousseau, Anaëlle Wilczynski
IJCAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In this section, we empirically evaluate the quality of different procedures for assigning projects to agents, both according to selection and allocation considerations. We generate 100 SA instances with n = 5 agents and m = 15 projects. Correlated preferences are studied via the single-peaked preference domain. A preference ranking ≻ᵢ is single-peaked if there exists a linear order > over P such that for all projects x, y, z with x > y > z or z > y > x, x ≻ᵢ y implies y ≻ᵢ z. We consider three types of preference generation: impartial culture (IC), where each preference ranking is uniformly drawn from the set of all possible preference rankings; single-peaked uniform peak (SP-UP), where each single-peaked preference ranking is generated by first uniformly selecting the peak project and then uniformly choosing the next ranked project either to the left of the peak on the axis > or to the right, and so on [Conitzer, 2009]; and single-peaked uniform (SP-U), where each preference ranking is uniformly drawn from the set of all possible single-peaked rankings [Walsh, 2015]. Our results are given in Figure 1. (A sketch of these three generation models appears after this table.) |
| Researcher Affiliation | Academia | Khaled Belahcène¹, Vincent Mousseau² and Anaëlle Wilczynski²; ¹Université de technologie de Compiègne, CNRS, Heudiasyc; ²MICS, CentraleSupélec, Université Paris-Saclay |
| Pseudocode | No | The paper describes concepts and algorithms but does not include explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | No | We generate 100 SA instances with n = 5 agents and m = 15 projects. Correlated preferences are studied via the single-peaked preference domain. We consider three types of preference generation: impartial culture (IC), where each preference ranking is uniformly drawn from the set of all possible preference rankings; single-peaked uniform peak (SP-UP), where each single-peaked preference ranking is generated by first uniformly selecting the peak project and then uniformly choosing the next ranked project either to the left of the peak on the axis > or to the right, and so on [Conitzer, 2009]; and single-peaked uniform (SP-U), where each preference ranking is uniformly drawn from the set of all possible single-peaked rankings [Walsh, 2015]. The paper describes the generation of synthetic data instances but does not provide access to a public dataset. |
| Dataset Splits | No | The paper generates synthetic instances for empirical evaluation but does not specify training, validation, or test splits; the evaluation appears to be performed directly on the generated instances, across the different allocation procedures and preference cultures. |
| Hardware Specification | No | The paper does not provide any specific details regarding the hardware used for running experiments (e.g., CPU/GPU models, memory). |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | Yes | We generate 100 SA instances with n = 5 agents and m = 15 projects. We consider three types of preference generation: impartial culture (IC), where each preference ranking is uniformly drawn from the set of all possible preference rankings; single-peaked uniform peak (SP-UP), where each single-peaked preference ranking is generated by first uniformly selecting the peak project and then uniformly choosing the next ranked project either to the left of the peak on the axis > or to the right, and so on [Conitzer, 2009]; and single-peaked uniform (SP-U), where each preference ranking is uniformly drawn from the set of all possible single-peaked rankings [Walsh, 2015]. For evaluating the quality of the set of selected projects (selection goal), we compute the satisfaction of an agent as the average rank, in her preference ranking, of the selected projects; the global rank satisfaction is then the average of these individual satisfactions over all agents. For evaluating the quality of the allocation (allocation goal), we consider the average over all agents of the rank each agent gives to her allocated project. (A sketch of both measures appears after this table.) |
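
For concreteness, here is a minimal Python sketch of the three preference-generation models quoted in the table. It assumes projects are identified with their positions 0, …, m−1 on the single-peaked axis and that a ranking is a Python list ordered best-first; the function names are ours, not the paper's.

```python
import random

def impartial_culture(m):
    """IC: a ranking drawn uniformly from all m! rankings (best first)."""
    ranking = list(range(m))
    random.shuffle(ranking)
    return ranking

def single_peaked_uniform_peak(m):
    """SP-UP [Conitzer, 2009]: pick the peak uniformly at random, then
    repeatedly append the unranked project adjacent to the already-ranked
    interval, choosing its left or right end with probability 1/2."""
    peak = random.randrange(m)
    ranking = [peak]
    left, right = peak - 1, peak + 1
    while left >= 0 or right < m:
        if left < 0:                      # only the right side remains
            ranking.append(right); right += 1
        elif right >= m:                  # only the left side remains
            ranking.append(left); left -= 1
        elif random.random() < 0.5:
            ranking.append(left); left -= 1
        else:
            ranking.append(right); right += 1
    return ranking

def single_peaked_uniform(m):
    """SP-U [Walsh, 2015]: a uniform draw over all 2^(m-1) single-peaked
    rankings w.r.t. the axis 0 < 1 < ... < m-1. Built worst-to-best: the
    worst remaining project must be an endpoint of the remaining interval,
    and each endpoint is chosen with probability 1/2."""
    lo, hi = 0, m - 1
    worst_first = []
    while lo < hi:
        if random.random() < 0.5:
            worst_first.append(lo); lo += 1
        else:
            worst_first.append(hi); hi -= 1
    worst_first.append(lo)
    return worst_first[::-1]              # reverse so the best comes first
```

An instance matching the paper's parameters would then be, e.g., `profile = [single_peaked_uniform(15) for _ in range(5)]` (n = 5 agents, m = 15 projects).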
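A matching sketch of the two rank-satisfaction measures described in the Experiment Setup row, under the same best-first list representation. Rank 1 is the most preferred project, so lower values are better; again, the helper names are hypothetical.

```python
def rank(ranking, project):
    """Rank of a project in a best-first ranking (1 = most preferred)."""
    return ranking.index(project) + 1

def selection_rank_satisfaction(profile, selected):
    """Selection goal: for each agent, the average rank she gives to the
    selected projects; then the average of these values over all agents."""
    per_agent = [sum(rank(r, p) for p in selected) / len(selected)
                 for r in profile]
    return sum(per_agent) / len(per_agent)

def allocation_rank_satisfaction(profile, allocation):
    """Allocation goal: average over agents of the rank each agent gives
    to her own project (allocation[i] = project assigned to agent i)."""
    ranks = [rank(r, allocation[i]) for i, r in enumerate(profile)]
    return sum(ranks) / len(ranks)
```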