Fairness Towards Groups of Agents in the Allocation of Indivisible Items

Authors: Nawal Benabbou, Mithun Chakraborty, Edith Elkind, Yair Zick

IJCAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We show experimentally that the classic algorithm of Lipton et al. [2004], equipped with a simple heuristic, can produce TEF1 allocations with significantly reduced waste... We experimentally compared procedures L and H using the percentage of items wasted as our performance metric. We simulated two sets of problem instances...
Researcher Affiliation | Academia | Sorbonne Université, CNRS, Laboratoire d'Informatique de Paris 6 (LIP6), F-75005 Paris, France; Department of Computer Science, National University of Singapore, Singapore; Department of Computer Science, University of Oxford, United Kingdom
Pseudocode | Yes | Algorithm 1: PMURR({N_p}_{p ∈ [k]}, M, (u(i, j))_{i ∈ N, j ∈ M})
Open Source Code | No | No explicit statement or link providing concrete access to source code for the methodology described in this paper was found.
Open Datasets | No | The paper describes generating synthetic data for simulations: 'We simulated two sets of problem instances... For each agent, we sampled m numbers uniformly at random from [0, 1] and normalized them to generate utilities for all m items.' However, it does not provide concrete access information (a link, DOI, specific repository, or formal citation) for a publicly available dataset.
Dataset Splits | No | The paper describes a simulation setup ('We simulated two sets of problem instances... We report results averaged over 100 runs each.') rather than explicit train/validation/test dataset splits. There is no mention of specific percentages, sample counts, or predefined splits for reproducibility.
Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running the experiments were provided.
Software Dependencies | No | No specific software dependencies or versions (e.g., library names with version numbers) were mentioned for replication.
Experiment Setup | No | The paper specifies the parameters for simulating problem instances (e.g., 'n = 100 agents partitioned into k = 3 types', 'm ∈ {50, 100} items', 'sampled m numbers uniformly at random from [0, 1] and normalized them to generate utilities'). However, it does not provide specific hyperparameter values or detailed system-level training settings as typically found in experimental setups for models.
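The utility-generation step quoted above (for each agent, sample m values uniformly at random from [0, 1] and normalize them) can be sketched as follows. This is a minimal sketch, not the authors' code: the function name, the seed parameter, and the choice of normalizing each agent's utilities to sum to 1 are assumptions, since the excerpt does not specify the exact normalization scheme.

```python
import numpy as np

def generate_instance(n=100, m=50, seed=None):
    """Sample an n-agent, m-item utility matrix as described in the
    paper's simulation setup: each agent's utilities are m uniform
    draws from [0, 1], normalized to sum to 1 (assumed scheme)."""
    rng = np.random.default_rng(seed)
    raw = rng.uniform(0.0, 1.0, size=(n, m))
    # Normalize per agent so that each row sums to 1.
    return raw / raw.sum(axis=1, keepdims=True)

# Example instance matching the reported setup: n = 100 agents, m = 50 items.
utilities = generate_instance(n=100, m=50, seed=0)
```

Under this assumption, repeating the generation (e.g., over 100 seeds, as in the reported averaging over 100 runs) yields the two instance sets by varying m between 50 and 100.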