Popularizing Fairness: Group Fairness and Individual Welfare
Authors: Andrew Estornell, Sanmay Das, Brendan Juba, Yevgeniy Vorobeychik
AAAI 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Through extensive experiments, we demonstrate that the proposed postprocessing approaches are highly effective. ... In this section we empirically investigate the relationship between popularity and fairness, and evaluate the efficacy of the proposed postprocessing algorithms. Each experiment is conducted on four data sets: 1) the Recidivism dataset, 2) the Income dataset, 3) the Community Crime dataset, and 4) the Law School dataset. |
| Researcher Affiliation | Academia | Andrew Estornell¹, Sanmay Das², Brendan Juba¹, Yevgeniy Vorobeychik¹; ¹Washington University in Saint Louis, ²George Mason University |
| Pseudocode | Yes | Algorithm 1: (Randomized DOS) Postprocessing technique for converting a β-fair model f^F into a γ-popular β-fair model f^P. *(An illustrative sketch follows the table.)* |
| Open Source Code | No | The paper does not contain an explicit statement or a link to open-source code for the described methodology. |
| Open Datasets | Yes | Each experiment is conducted on four data sets: 1) the Recidivism dataset, 2) the Income dataset, 3) the Community Crime dataset, and 4) the Law School dataset. |
| Dataset Splits | No | The paper mentions a "3-fold average" for test data but does not explicitly provide percentages or counts for training, validation, or test splits, nor does it refer to predefined standard splits for validation. *(A 3-fold sketch follows the table.)* |
| Hardware Specification | No | The paper does not provide any specific details about the hardware used for running the experiments (e.g., GPU models, CPU types, memory). |
| Software Dependencies | No | The paper does not specify any software dependencies or their version numbers. |
| Experiment Setup | No | The paper mentions using Logistic Regression as the classifier and the Reductions method for fairness, but does not specify hyperparameters such as learning rates, batch sizes, or other training configurations. *(A hedged setup sketch follows the table.)* |
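
The Pseudocode row cites Algorithm 1 (Randomized DOS), which postprocesses a β-fair model f^F into a γ-popular β-fair model f^P. The paper's exact procedure is not reproduced in this report, so the following is only a minimal sketch of randomized postprocessing in that spirit; the function name `randomized_postprocess`, the `defer_prob` knob, and the mixing rule are illustrative assumptions and do not enforce the paper's γ-popularity or β-fairness guarantees.

```python
import numpy as np

def randomized_postprocess(fair_preds, preferred_preds, defer_prob, seed=0):
    """Generic randomized postprocessing sketch (NOT the paper's Algorithm 1).

    fair_preds      : binary decisions of a fair model f^F
    preferred_preds : the decision each individual would prefer to receive
    defer_prob      : hypothetical probability of deferring to the preferred
                      decision, standing in for whatever randomization the
                      paper uses to reach gamma-popularity
    """
    rng = np.random.default_rng(seed)
    # Independently flip each decision to the individually preferred one
    # with probability defer_prob; otherwise keep the fair model's decision.
    defer = rng.random(len(fair_preds)) < defer_prob
    return np.where(defer, preferred_preds, fair_preds)
```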
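
The Dataset Splits row quotes a "3-fold average" over test data. One plausible reading is a standard 3-fold cross-validation; the sketch below shows how such an average could be computed with scikit-learn's `KFold`. This is an assumption about the protocol, not the authors' reported code.

```python
import numpy as np
from sklearn.model_selection import KFold
from sklearn.linear_model import LogisticRegression

def three_fold_average(X, y):
    """Average test accuracy over 3 folds, one plausible reading of the
    paper's '3-fold average' (X and y assumed to be numpy arrays)."""
    kf = KFold(n_splits=3, shuffle=True, random_state=0)
    scores = []
    for train_idx, test_idx in kf.split(X):
        clf = LogisticRegression(max_iter=1000)
        clf.fit(X[train_idx], y[train_idx])
        scores.append(clf.score(X[test_idx], y[test_idx]))
    return float(np.mean(scores))
```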
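
The Experiment Setup row names Logistic Regression and the Reductions method for fairness. The Reductions approach (Agarwal et al., 2018) has a widely used implementation in fairlearn's `ExponentiatedGradient`; the pairing below is a hedged sketch of such a setup. The paper does not state which implementation, fairness constraint, or hyperparameters were used, so all of those choices here are assumptions.

```python
# Hedged sketch: Reductions-based fair training via fairlearn.
# The choice of DemographicParity and all hyperparameters are assumptions,
# not the authors' reported configuration.
from fairlearn.reductions import ExponentiatedGradient, DemographicParity
from sklearn.linear_model import LogisticRegression

def fit_reduction(X, y, sensitive):
    """Fit a logistic-regression base learner under a fairness constraint
    using the reductions method."""
    base = LogisticRegression(max_iter=1000)
    mitigator = ExponentiatedGradient(base, constraints=DemographicParity())
    mitigator.fit(X, y, sensitive_features=sensitive)
    return mitigator

# Usage: preds = fit_reduction(X_train, y_train, s_train).predict(X_test)
```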