Achieving a Fairer Future by Changing the Past

Authors: Jiafan He, Ariel D. Procaccia, Alexandros Psomas, David Zeng

IJCAI 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "For the case of two agents, we show that algorithms that are informed about the values of future items can get by without any adjustments, whereas uninformed algorithms require Θ(T) adjustments. For the general case of three or more agents, we prove that even informed algorithms must use Ω(T) adjustments, and design an uninformed algorithm that requires only O(T^{3/2})."
Researcher Affiliation | Academia | Jiafan He (1), Ariel D. Procaccia (2), Alexandros Psomas (2) and David Zeng (2). (1) Institute for Interdisciplinary Information Sciences, Tsinghua University; (2) Computer Science Department, Carnegie Mellon University
Pseudocode | Yes | Algorithm 1 "Fractional Item Rounding" (input: v1, v2); Algorithm 2 "Envy Balancing" (input: v1, v2); Algorithm 3 "Double Round Robin" (input: v_i for each agent a_i)
Open Source Code | No | The paper does not provide a link to, or explicitly state the release of, open-source code for the methodology described. Footnote 1 refers to the full version of the paper itself, not source code.
Open Datasets | No | The paper is theoretical and does not involve empirical experiments using datasets, so there is no information regarding public datasets or training data.
Dataset Splits | No | The paper is theoretical and does not involve empirical experiments, so there is no information regarding training/validation/test dataset splits.
Hardware Specification | No | The paper is theoretical and does not involve empirical experiments, so no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not involve empirical experiments, so no software dependencies with version numbers are mentioned.
Experiment Setup | No | The paper is theoretical and does not describe empirical experiments, so it does not include details of an experimental setup such as hyperparameters or training settings.