Causal Conceptions of Fairness and their Consequences

Authors: Hamed Nilforoshan, Johann D. Gaebler, Ravi Shroff, Sharad Goel

Venue: ICML 2022

| Reproducibility Variable | Result | LLM Response |
| --- | --- | --- |
| Research Type | Experimental | "We then show, analytically and empirically, that both families of definitions almost always (in a measure theoretic sense) result in strongly Pareto dominated decision policies." |
| Researcher Affiliation | Academia | "¹Stanford University, Stanford, CA; ²New York University, New York, NY; ³Harvard University, Cambridge, MA." |
| Pseudocode | Yes | "Algorithm 1: Path-specific counterfactuals" |
| Open Source Code | Yes | "Code to reproduce our results is available at https://github.com/stanford-policylab/causal-fairness." |
| Open Datasets | No | "In the hypothetical pool of 100,000 applicants we consider, applicants in the target race group a1 have, on average, fewer educational opportunities than those applicants in group a0, which leads to lower average academic preparedness, as well as lower average test scores." This describes a synthetically generated dataset, not a publicly available one. |
| Dataset Splits | No | The paper uses a "hypothetical pool of 100,000 applicants" for its empirical example but does not specify any training, validation, or test splits. |
| Hardware Specification | No | The paper presents analytical proofs and an empirical example on a hypothetical dataset; it does not report the hardware (e.g., GPU or CPU models, memory) used for its computations or experiments. |
| Software Dependencies | No | The paper notes that code is available on GitHub but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, or CUDA). |
| Experiment Setup | Yes | "In our example, we use constants µ_A = 1/3, β_{E,0} = 1, β_{E,A} = 1, β_{M,0} = 0, β_{M,E} = 1, β_{T,0} = 50, β_{T,E} = 4, β_{T,M} = 4, β_{T,u} = 7, β_{T,B} = 1, β_{Y,0} = 1/2, β_{Y,D} = 1/2. We also assume a budget b = 1/4." (A parameter sketch follows the table below.) |
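
The constants in the Experiment Setup row parameterize the paper's hypothetical applicant pool. The minimal Python sketch below only collects the quoted coefficients and draws an illustrative pool of 100,000 applicants. The variable roles (E as educational opportunity, M as academic preparedness, T as test score) are inferred from the paper's description; the linear-Gaussian structural equations, the shared noise term u, the negative sign of the group effect on E, and the function names are assumptions made for illustration and are not the paper's own structural equations.

```python
import numpy as np

# Coefficient values quoted from the paper's experiment setup; everything
# else in this sketch (functional forms, noise, signs) is an assumption.
PARAMS = {
    "mu_A": 1 / 3,            # assumed: share of applicants in group a1
    "beta_E_0": 1.0, "beta_E_A": 1.0,
    "beta_M_0": 0.0, "beta_M_E": 1.0,
    "beta_T_0": 50.0, "beta_T_E": 4.0, "beta_T_M": 4.0,
    "beta_T_u": 7.0, "beta_T_B": 1.0,
    "beta_Y_0": 0.5, "beta_Y_D": 0.5,
    "budget": 0.25,           # fraction of applicants that can be admitted
}


def simulate_pool(n=100_000, p=PARAMS, seed=0):
    """Draw an illustrative applicant pool of size n.

    NOTE: only the coefficient values come from the paper. The
    linear-Gaussian equations, the minus sign on the group effect
    (chosen so group a1 has fewer educational opportunities, matching
    the paper's description), and the omission of beta_T_B and the
    decision/outcome equations are simplifying assumptions.
    """
    rng = np.random.default_rng(seed)
    A = rng.binomial(1, p["mu_A"], size=n)      # 1 if applicant is in group a1
    u = rng.normal(size=n)                      # latent noise shared by M and T
    E = p["beta_E_0"] - p["beta_E_A"] * A + rng.normal(size=n)
    M = p["beta_M_0"] + p["beta_M_E"] * E + u
    T = p["beta_T_0"] + p["beta_T_E"] * E + p["beta_T_M"] * M + p["beta_T_u"] * u
    return {"A": A, "E": E, "M": M, "T": T}


if __name__ == "__main__":
    pool = simulate_pool()
    for group in (0, 1):
        mask = pool["A"] == group
        print(f"group a{group}: mean test score = {pool['T'][mask].mean():.1f}")
```

The admission decision under the budget b = 1/4 and the outcome equation for Y are deliberately left out, since the quoted setup gives their coefficients but not their functional forms; only the synthetic covariate pool is sketched.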