When Worlds Collide: Integrating Different Counterfactual Assumptions in Fairness
Authors: Chris Russell, Matt J. Kusner, Joshua Loftus, Ricardo Silva
NeurIPS 2017 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the flexibility of our method on two real-world fair classification problems: 1. fair predictions of student performance in law schools; and 2. predicting whether criminals will re-offend upon being released. For each dataset we begin by giving details of the fair prediction problem. We then introduce multiple causal models that each possibly describe how unfairness plays a role in the data. Finally, we give results of Multi-World Fairness (MWF) and show how it changes for different settings of the fairness parameters (ϵ, δ). We split the law school data into a random 80/20 train/test split, fit causal models and classifiers on the training set, and evaluate performance on the test set. (Hedged sketches of this split-and-evaluate protocol and of an MWF-style penalized objective follow the table.) |
| Researcher Affiliation | Academia | Chris Russell The Alan Turing Institute and University of Surrey crussell@turing.ac.uk Matt J. Kusner The Alan Turing Institute and University of Warwick mkusner@turing.ac.uk Joshua R. Loftus New York University loftus@nyu.edu Ricardo Silva The Alan Turing Institute and University College London ricardo@stats.ucl.ac.uk |
| Pseudocode | Yes | Algorithm 1 Multi-World Fairness |
| Open Source Code | No | The paper does not provide any explicit statement or link regarding the availability of its source code. |
| Open Datasets | Yes | We begin by investigating a dataset of survey results across 163 U.S. law schools conducted by the Law School Admission Council [19]. Separately, ProPublica [13] released data on prisoners in Broward County, Florida, who were awaiting a sentencing hearing. |
| Dataset Splits | No | The paper mentions an “80/20 train/test split” for both datasets but does not specify a validation split. |
| Hardware Specification | No | The paper does not provide any specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies, such as programming language versions or library version numbers (e.g., Python, TensorFlow, PyTorch versions). |
| Experiment Setup | No | The paper mentions using linear classifiers and a 3-layer neural network and describes how λ values are selected through grid search. However, it does not provide specific hyperparameter values like learning rates, batch sizes, number of epochs, or optimizer settings for the models used in the experiments. |
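
The Research Type, Dataset Splits, and Experiment Setup rows above describe a random 80/20 train/test split, with models fit on the training portion and scored on the held-out 20%. The snippet below is a minimal sketch of that protocol, assuming a scikit-learn workflow; the synthetic arrays merely stand in for the law-school features, and all variable names are illustrative rather than taken from the paper (no code was released).

```python
# Minimal sketch of the quoted evaluation protocol: random 80/20 split,
# fit a linear classifier on the training set, evaluate on the test set.
# The data below are synthetic placeholders, not the law-school dataset.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                          # placeholder features
y = (X[:, 0] + rng.normal(size=1000) > 0).astype(int)   # placeholder labels

# Random 80/20 train/test split, as described in the paper.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0
)

# A plain linear classifier stands in for the paper's linear models.
clf = LogisticRegression().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.3f}")
```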
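
The Pseudocode row points to Algorithm 1 (Multi-World Fairness), which balances predictive loss against counterfactual deviations across several candidate causal models via a weight λ and fairness parameters (ϵ, δ). The function below is one plausible reading of such a penalized objective, not the authors' Algorithm 1: the counterfactual features, the hinge-style penalty, and all names are assumptions made for illustration, and the δ parameter is left out of this sketch.

```python
# Hedged sketch of a multi-world counterfactual-fairness penalty: the
# logistic loss is augmented with a term, weighted by lam, that charges
# each causal world for predictions that differ from an individual's
# counterfactual prediction by more than eps. Illustrative only.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def mwf_loss(w, X, y, counterfactuals, lam, eps):
    """Logistic loss plus a multi-world counterfactual-fairness penalty.

    counterfactuals: one array per candidate causal world, each the same
    shape as X, holding that world's counterfactual features.
    """
    p = sigmoid(X @ w)
    nll = -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))
    penalty = 0.0
    for X_cf in counterfactuals:
        gap = np.abs(sigmoid(X_cf @ w) - p)
        # Only deviations beyond the tolerance eps are penalized.
        penalty += np.mean(np.maximum(0.0, gap - eps))
    return nll + lam * penalty

# Tiny synthetic example with two hypothetical causal worlds.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)
worlds = [X + rng.normal(scale=0.1, size=X.shape) for _ in range(2)]
print(mwf_loss(np.zeros(3), X, y, worlds, lam=1.0, eps=0.05))
```

The Experiment Setup row notes that λ is chosen by grid search; in a sketch like this, one would sweep `lam` over a small grid and pick the value that best trades held-out accuracy against the remaining counterfactual gap.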