Causal Fairness under Unobserved Confounding: A Neural Sensitivity Framework

Authors: Maresa Schröder, Dennis Frauen, Stefan Feuerriegel

ICLR 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our contributions are three-fold. First, we derive bounds for causal fairness metrics under different sources of unobserved confounding. This enables practitioners to examine the sensitivity of their machine learning models to unobserved confounding in fairness-critical applications. Second, we propose a novel neural framework for learning fair predictions, which allows us to offer worst-case guarantees of the extent to which causal fairness can be violated due to unobserved confounding. Third, we demonstrate the effectiveness of our framework in a series of experiments, including a real-world case study about predicting prison sentences.
Researcher Affiliation | Academia | Maresa Schröder, Dennis Frauen & Stefan Feuerriegel, Munich Center for Machine Learning, LMU Munich, {maresa.schroder,frauen,feuerriegel}@lmu.de
Pseudocode | Yes | Algorithm 1: Training fair prediction models robust to unobserved confounding
Open Source Code | Yes | Both data and code for our framework are available in our GitHub repository.
Open Datasets | Yes | Our real-world study is based on the US Survey of Prison Inmates (United States. Bureau of Justice Statistics, 2021). We aim to predict the prison sentence length for drug offenders. For this, we build upon the causal graph from Fig. 1.
Dataset Splits | Yes | We generate multiple datasets per setting with different confounding levels, which we split into train/val/test sets (60/20/20%). Details are in Supplement H.1. (A split sketch follows the table.)
Hardware Specification | No | The paper mentions implementing experiments using PyTorch Lightning and training neural networks, but it does not specify any particular hardware components such as CPU models, GPU models, or memory.
Software Dependencies | No | The paper mentions 'PyTorch Lightning', 'Adam', and 'Optuna' but does not specify their version numbers. (A version-logging sketch follows the table.)
Experiment Setup | Yes | The models consisted of one hidden layer of size ten and a dropout layer with a rate of 0.1, and were trained with a batch size of 128 and an initial learning rate of 0.0001. We trained the fair naïve model and the fair robust model with a fairness constraint of γ = 0.02, initial Lagrangian parameters λ = 0.1, µ = 0.02, and an update rate of α = 1.5. (A training sketch follows the table.)
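
A minimal sketch of one way to realize the reported 60/20/20 train/val/test split. The DataFrame contents and the random seed are illustrative, not from the paper; the authors' actual splitting code is in their GitHub repository.

```python
import numpy as np
import pandas as pd
from sklearn.model_selection import train_test_split

# Illustrative stand-in for one generated dataset.
df = pd.DataFrame(np.random.randn(1000, 5), columns=list("abcde"))

# Carve off 60% for training, then split the remaining 40% in half,
# yielding the reported 60/20/20 train/val/test proportions.
train, rest = train_test_split(df, test_size=0.4, random_state=0)
val, test = train_test_split(rest, test_size=0.5, random_state=0)
```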
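Because the paper names its dependencies without version numbers, anyone reproducing the setup may want to record the versions actually installed. A minimal sketch, assuming the three named packages are importable:

```python
import optuna
import pytorch_lightning
import torch

# Log the versions in use so a future run can pin the same environment.
for pkg in (torch, pytorch_lightning, optuna):
    print(f"{pkg.__name__}=={pkg.__version__}")
```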
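The reported architecture and hyperparameters can be read as the sketch below. Only the layer sizes, dropout rate, learning rate, and the values of γ, λ, µ, and α come from the paper; the input dimension, the MSE prediction loss, the `fairness_violation` helper, and the multiplicative update of µ by α are assumptions standing in for the paper's Algorithm 1, which is not reproduced in this excerpt.

```python
import torch
import torch.nn as nn

in_dim = 8  # illustrative; the covariate dimension is dataset-specific
model = nn.Sequential(
    nn.Linear(in_dim, 10),  # one hidden layer of size ten (reported)
    nn.ReLU(),
    nn.Dropout(0.1),        # dropout rate 0.1 (reported)
    nn.Linear(10, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)  # reported LR

gamma, lam, mu, alpha = 0.02, 0.1, 0.02, 1.5  # reported values

def training_step(x, y, fairness_violation):
    """One constrained update on a batch (the paper uses batch size 128).

    `fairness_violation` is a hypothetical differentiable estimate of the
    causal-fairness bound; the augmented-Lagrangian form is an assumption.
    """
    global lam, mu
    pred_loss = nn.functional.mse_loss(model(x).squeeze(-1), y)
    constraint = fairness_violation(model, x) - gamma  # want <= 0
    penalty = torch.relu(constraint)                   # penalize violations only
    loss = pred_loss + lam * penalty + 0.5 * mu * penalty.pow(2)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    # Dual and penalty updates; the schedule is an assumed reading of
    # "update rate alpha = 1.5", not the paper's exact rule.
    lam = lam + mu * penalty.item()
    mu = mu * alpha
    return loss.item()
```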