Fair Division Through Information Withholding

Authors: Hadi Hosseini, Sujoy Sikdar, Rohit Vaish, Hejun Wang, Lirong Xia

AAAI 2020 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments show that HEF-k allocations with a small k often exist, even when envy-free allocations do not (Figure 1). We also compare several known algorithms for computing EF1 allocations on synthetic and real-world preference data, and find that the round-robin algorithm and a recent algorithm of Barman, Krishnamurthy, and Vaish (2018) withhold a close-to-optimal amount of information, often hiding no more than three goods (Section 5).
Researcher Affiliation | Academia | Hadi Hosseini, Rochester Institute of Technology; Washington University in St. Louis; Rensselaer Polytechnic Institute; hho@cs.rit.edu, sujoy@wustl.edu, {vaishr2, wangj38}@rpi.edu, xial@cs.rpi.edu
Pseudocode | Yes | ALGORITHM 1: Greedy Approximation Algorithm for HEF-k-VERIFICATION (an illustrative code sketch of the underlying fairness notions follows the table)
Open Source Code | No | No explicit statement or link providing access to the source code for the methodology described in this paper.
Open Datasets | Yes | For experiments with real-world data, we use the data from the popular fair division website Spliddit (Goldman and Procaccia 2014).
Dataset Splits | No | The paper describes generating synthetic data and using real-world data, but does not provide specific training/validation/test dataset splits (e.g., percentages or sample counts) for reproducibility.
Hardware Specification | No | No specific hardware details (e.g., GPU/CPU models, memory amounts) are provided for the experimental setup. The paper only mentions running experiments.
Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9) are provided for reproducibility of the experiments.
Experiment Setup | No | The paper describes the setup for generating synthetic data (e.g., varying n and m, Bernoulli distribution parameter) and the metrics used (regret, hidden goods), but it does not provide specific experimental setup details such as hyperparameters or training configurations for the algorithms themselves.
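To make the notions referenced in the Research Type and Pseudocode rows concrete, below is a minimal Python sketch of (i) the round-robin procedure, which for additive valuations yields an EF1 allocation, and (ii) a brute-force search for the fewest goods to withhold so that an allocation looks envy-free, the kind of quantity the paper's experiments report. This is an illustrative reconstruction under stated assumptions, not the authors' code or their greedy Algorithm 1: it assumes nonnegative additive valuations given as a matrix valuations[i][g], and that each agent sees its own full bundle but only the non-withheld goods in other agents' bundles.

```python
from itertools import combinations


def is_envy_free(alloc, valuations, hidden=frozenset()):
    """Envy check when goods in `hidden` are invisible in OTHER agents'
    bundles; each agent still values its own full bundle (assumption)."""
    for i in range(len(alloc)):
        own_value = sum(valuations[i][g] for g in alloc[i])
        for j in range(len(alloc)):
            if i == j:
                continue
            visible_value = sum(valuations[i][g] for g in alloc[j] if g not in hidden)
            if own_value < visible_value:
                return False
    return True


def round_robin(valuations):
    """Agents take turns (0, 1, ..., n-1, 0, ...) picking their most-valued
    remaining good; for additive valuations the result is EF1."""
    n, m = len(valuations), len(valuations[0])
    remaining = set(range(m))
    alloc = [[] for _ in range(n)]
    for turn in range(m):
        i = turn % n
        pick = max(remaining, key=lambda g: valuations[i][g])
        alloc[i].append(pick)
        remaining.remove(pick)
    return alloc


def fewest_hidden_goods(alloc, valuations):
    """Smallest set of goods whose withholding makes `alloc` look envy-free.
    Brute force over subsets (exponential), unlike the paper's polynomial-time
    greedy approximation; with nonnegative values, hiding every good suffices."""
    goods = [g for bundle in alloc for g in bundle]
    for k in range(len(goods) + 1):
        for hidden in combinations(goods, k):
            if is_envy_free(alloc, valuations, frozenset(hidden)):
                return set(hidden)
    return set(goods)


if __name__ == "__main__":
    valuations = [[5, 1, 3, 2],   # agent 0's values for goods 0..3
                  [4, 4, 1, 1]]   # agent 1's values for goods 0..3
    alloc = round_robin(valuations)                       # -> [[0, 2], [1, 3]]
    print(alloc, fewest_hidden_goods(alloc, valuations))  # -> [[0, 2], [1, 3]] set()
```

In the small example at the bottom, the round-robin allocation is already envy-free, so no goods need to be hidden; on instances where envy-freeness fails, the size of the set returned by fewest_hidden_goods is the smallest k for which the allocation is HEF-k under the assumptions above.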