Causal Context Connects Counterfactual Fairness to Robust Prediction and Group Fairness
Authors: Jacy Anthis, Victor Veitch
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we conduct brief experiments in a semi-synthetic setting with the Adult income dataset [3] to confirm that a counterfactually fair predictor under these conditions achieves out-of-distribution accuracy and the corresponding group fairness metric. |
| Researcher Affiliation | Academia | University of Chicago; University of California, Berkeley; Sentience Institute |
| Pseudocode | No | The paper states theorems and defines a predictor (Theorem 3) but does not present a structured pseudocode block or algorithm. |
| Open Source Code | Yes | Code to reproduce these results or produce results with varied inputs (number of datasets sampled, effect of A on X, probabilities of each bias, type of predictor) is available at https://github.com/jacyanthis/Causal-Context. |
| Open Datasets | Yes | We used the Adult income dataset [3] with a simulated protected class A, balanced with P(A = 0) = P(A = 1) = 0.5. [3] refers to 'Barry Becker and Ronny Kohavi. Adult. 1996. DOI: 10.24432/C5XW20.' |
| Dataset Splits | No | The paper mentions using the Adult income dataset and training predictors but does not specify how the dataset was split into training, validation, and test sets (e.g., percentages or sample counts). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU specifications, or cloud computing instance types used for running the experiments. |
| Software Dependencies | No | The paper does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions). |
| Experiment Setup | Yes | On each dataset, we trained three predictors: a naive predictor trained on A and X, a fairness through unawareness (FTU) predictor trained only on X, and a counterfactually fair predictor based on an average of the naive prediction under the assumption that A = 1 and the naive prediction under the assumption A = 0, weighted by the proportion of each group in the target distribution. |
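To make the "Experiment Setup" row concrete, the sketch below reconstructs the three predictors in a minimal form. It is not the authors' code (that lives in the linked GitHub repository); it assumes the Adult data is fetched from OpenML, keeps only numeric features, simulates the balanced protected class A as a fair coin flip, and uses logistic regression as a stand-in base model.

```python
# Minimal sketch of the three predictors described in the paper's experiment setup.
# Assumptions not stated in the paper: OpenML as the data source, numeric-only
# features, logistic regression as the base model, and a 70/30 train/test split.
import numpy as np
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Load the Adult income dataset; keep numeric columns for simplicity.
adult = fetch_openml("adult", version=2, as_frame=True)
X = adult.data.select_dtypes(include="number").fillna(0)
y = (adult.target == ">50K").astype(int)  # assumed label encoding

# Simulate a balanced protected class A with P(A = 0) = P(A = 1) = 0.5.
A = rng.integers(0, 2, size=len(X))

X_tr, X_te, y_tr, y_te, A_tr, A_te = train_test_split(
    X, y, A, test_size=0.3, random_state=0
)

# Naive predictor: trained on A and X.
naive = LogisticRegression(max_iter=1000).fit(
    np.column_stack([A_tr, X_tr]), y_tr
)

# Fairness-through-unawareness (FTU) predictor: trained on X only.
ftu = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

# Counterfactually fair predictor: average the naive prediction with A set to 1
# and with A set to 0, weighted by the group proportions in the target
# distribution (0.5 each here because A is balanced).
p_a1 = 0.5
probs_a1 = naive.predict_proba(np.column_stack([np.ones(len(X_te)), X_te]))[:, 1]
probs_a0 = naive.predict_proba(np.column_stack([np.zeros(len(X_te)), X_te]))[:, 1]
cf_probs = p_a1 * probs_a1 + (1 - p_a1) * probs_a0

print("naive accuracy:", naive.score(np.column_stack([A_te, X_te]), y_te))
print("FTU accuracy:  ", ftu.score(X_te, y_te))
print("CF accuracy:   ", ((cf_probs > 0.5).astype(int) == y_te).mean())
```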