Fairness in Contextual Resource Allocation Systems: Metrics and Incompatibility Results

Authors: Nathanael Jo, Bill Tang, Kathryn Dullerud, Sina Aghaei, Eric Rice, Phebe Vayanos

AAAI 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | Our work culminates with a set of incompatibility results that investigate the interplay between the different fairness metrics we propose. Notably, we demonstrate that: 1) fairness in allocation and fairness in outcomes are usually incompatible; 2) policies that prioritize based on a vulnerability score will usually result in unequal outcomes across groups, even if the score is perfectly calibrated; 3) policies using contextual information beyond what is needed to characterize baseline risk and treatment effects can be fairer in their outcomes than those using just baseline risk and treatment effects; and 4) policies using group status in addition to baseline risk and treatment effects are as fair as possible given all available information. (An illustrative sketch of result 2 appears after this table.)
Researcher Affiliation | Academia | Nathanael Jo*, Bill Tang*, Kathryn Dullerud, Sina Aghaei, Eric Rice, Phebe Vayanos; USC Center for AI in Society, Los Angeles, CA; nathanael.jo@gmail.com, {yongpeng, kdulleru, saghaei, ericr, phebe.vayanos}@usc.edu
Pseudocode | No | The paper is theoretical, presenting definitions and mathematical propositions. It does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the methodology described, nor does it include links to any code repositories.
Open Datasets | No | The paper does not describe any experiments that involve training a model on a specific dataset. It proposes a theoretical framework and incompatibility results, using a real-world scenario (homelessness in LA) as a motivating example but not as a dataset for empirical evaluation within the paper.
Dataset Splits | No | The paper does not describe any experiments involving model training or validation. It presents a theoretical framework and incompatibility results.
Hardware Specification | No | The paper is theoretical and does not involve computational experiments that would require specific hardware. Therefore, no hardware specifications are mentioned.
Software Dependencies | No | The paper is theoretical and does not describe any software implementation or dependencies with version numbers.
Experiment Setup | No | The paper is theoretical and does not involve experimental setups, hyperparameters, or training configurations for models.
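Result 2 in the Research Type row is the least intuitive claim, so a minimal simulation may help make it concrete. The sketch below is not from the paper: the group labels, the specific risk values, the 25% capacity, and the assumption that treatment fully prevents the adverse outcome are all illustrative choices. Under these assumptions, the vulnerability score is perfectly calibrated (it equals the true baseline risk), yet a capacity-limited, score-ranking policy still leaves the two groups with different post-allocation outcome rates.

```python
# Hypothetical illustration (not from the paper) of incompatibility result 2:
# a perfectly calibrated vulnerability score, used by a capacity-limited
# prioritization policy, can still produce unequal outcomes across groups.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
group = rng.integers(0, 2, size=n)            # 0 = group A, 1 = group B

# Baseline risk of the adverse outcome if untreated (illustrative values):
# group A: everyone at moderate risk (0.3);
# group B: half at low risk (0.1), half at high risk (0.6).
risk = np.where(group == 0, 0.3,
                np.where(rng.random(n) < 0.5, 0.1, 0.6))

# Perfectly calibrated score: the score equals the true baseline risk,
# i.e. P(adverse outcome | untreated, score = s) = s for every s.
score = risk

# Capacity-limited policy: treat the 25% of people with the highest scores;
# assume, for this sketch only, that treatment fully prevents the outcome.
capacity = int(0.25 * n)
treated = np.zeros(n, dtype=bool)
treated[np.argsort(-score, kind="stable")[:capacity]] = True

# The adverse outcome is realized only for untreated people, at their risk.
adverse = (rng.random(n) < risk) & ~treated

for g, name in [(0, "group A"), (1, "group B")]:
    m = group == g
    print(f"{name}: treated rate = {treated[m].mean():.2f}, "
          f"adverse outcome rate = {adverse[m].mean():.2f}")
```

With these illustrative numbers, nearly all of the capacity goes to group B's high-risk members, and the simulation prints an adverse outcome rate near 0.30 for group A versus roughly 0.05 for group B. The score is calibrated and the policy never looks at group membership, yet the groups end up with unequal outcomes, which is the kind of gap that result 2 says should be expected in general.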