Dynamic Fair Division with Partial Information

Authors: Gerdus Benadè, Daniel Halpern, Alexandros Psomas

NeurIPS 2022

Reproducibility assessment (variable, result, and LLM response):
Research Type: Theoretical. "We give an algorithm that is envy-free and (1 − ε)-welfare-maximizing with high probability. We provide similar guarantees (envy-freeness and a constant approximation to welfare with high probability) even with minimally expressive queries that ask for a comparison to a single previous item. For independent but non-identical agents, we obtain envy-freeness and a constant approximation to Pareto efficiency with high probability. We prove that all our results are asymptotically tight."
Researcher Affiliation: Academia. Gerdus Benadè (Boston University, benade@bu.edu); Daniel Halpern (Harvard University, dhalpern@g.harvard.edu); Alexandros Psomas (Purdue University, apsomas@cs.purdue.edu).
Pseudocode: Yes. Algorithm 1 (EF + (1 − ε)-Welfare); a runnable sketch appears after this assessment:

    for epoch k = 1, 2, ... do
        Sampling Phase (n · k^4 items):
            Give the j-th item of this phase to agent j (mod n).
        Ranking Phase (k^8 items):
            for each item g in this phase do
                Elicit σ_i^{-1}(G_i^k ∪ {g}, g) from each agent i.
                Allocate g to an agent j ∈ argmin_{i ∈ N} σ_i^{-1}(G_i^k ∪ {g}, g).
Open Source Code: No. The paper does not provide concrete access to source code (no repository link, explicit code-release statement, or code in the supplementary materials).

Open Datasets: No. The paper is theoretical, focusing on algorithm design and proofs rather than empirical evaluation, and does not mention any publicly available dataset.

Dataset Splits: No. The paper describes no empirical experiments, so no training, validation, or test splits are reported.

Hardware Specification: No. Since the paper contains no empirical experiments, no hardware specifications are mentioned.

Software Dependencies: No. The paper describes no implemented experiments that would require specific software dependencies or version numbers.

Experiment Setup: No. The paper does not describe an experimental setup with specific hyperparameters or training configurations.
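To make the epoch structure of Algorithm 1 concrete, here is a minimal Python sketch. It is an illustration, not the authors' code: it assumes i.i.d. uniform item values standing in for the unknown distribution, interprets G_i^k as the items agent i received in epoch k's sampling phase, and interprets σ_i^{-1}(S, g) as the rank of g within S ∪ {g} under agent i's values (1 = most preferred). The paper's analysis, not this simulation, is what establishes the envy-freeness and welfare guarantees.

```python
import random


def rank(values, bundle, g):
    """Position of item g (1 = most preferred) among bundle + {g} under `values`."""
    vals = sorted((values[h] for h in list(bundle) + [g]), reverse=True)
    return vals.index(values[g]) + 1


def simulate(n=3, epochs=2, seed=0):
    rng = random.Random(seed)
    bundles = [[] for _ in range(n)]   # all items each agent holds
    values = [{} for _ in range(n)]    # agent -> item -> sampled value
    next_item = 0

    def arrive():
        """A new item arrives; each agent draws an i.i.d. value for it (assumption)."""
        nonlocal next_item
        g = next_item
        next_item += 1
        for i in range(n):
            values[i][g] = rng.random()
        return g

    for k in range(1, epochs + 1):
        # Sampling phase: n * k^4 items, allocated round-robin.
        epoch_sample = [[] for _ in range(n)]
        for j in range(n * k ** 4):
            g = arrive()
            epoch_sample[j % n].append(g)
            bundles[j % n].append(g)

        # Ranking phase: k^8 items. Each item goes to an agent for whom it
        # ranks best relative to that agent's sampling-phase items G_i^k.
        for _ in range(k ** 8):
            g = arrive()
            ranks = [rank(values[i], epoch_sample[i], g) for i in range(n)]
            winner = ranks.index(min(ranks))
            bundles[winner].append(g)

    return bundles, values


if __name__ == "__main__":
    bundles, values = simulate()
    for i, b in enumerate(bundles):
        total = sum(values[i][g] for g in b)
        print(f"agent {i}: {len(b)} items, total value {total:.1f}")
```

Note the design choice the pseudocode encodes: the ranking phase never asks agents for cardinal values, only for the ordinal position of the new item among a small reference set, which is what makes the algorithm work with partial information.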