Fairness without Harm: Decoupled Classifiers with Preference Guarantees

Authors: Berk Ustun, Yang Liu, David Parkes

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We validate the effectiveness of the procedure on real-world datasets, showing that it improves accuracy without violating preference guarantees on test data. We present experiments on real-world datasets that show that our procedure can output classifiers with good accuracy and that are responsive to preference guarantees.
Researcher Affiliation | Academia | Berk Ustun (1), Yang Liu (2), David C. Parkes (1); (1) Harvard University, Cambridge, MA, USA; (2) UC Santa Cruz, Santa Cruz, CA, USA.
Pseudocode | Yes | Algorithm 1: Recursive Decoupling. (An illustrative sketch of the decoupled-training idea appears below the table.)
Open Source Code | No | The paper states "We provide software to reproduce our results at [...]", but the provided URL is a general project page (decoupled-classifiers.com), not a direct link to a source-code repository.
Open Datasets | Yes | The datasets include: adult, the Adult dataset from the UCI ML Repository (Lichman, 2013); arrest and violent, the COMPAS recidivism dataset for arrest and violent crime (Angwin et al., 2016); apnea, a dataset to diagnose obstructive sleep apnea (Ustun et al., 2016); and cancer, a dataset to diagnose lung cancer (National Lung Screening Trial Research Team, 2011).
Dataset Splits | Yes | We allocate a third of the training data to the pruning procedure, and discard trees that violate rationality or envy-freeness at a significance level of 10%.
Hardware Specification | No | The paper does not explicitly mention any hardware specifications (e.g., GPU/CPU models or memory) used to run the experiments.
Software Dependencies | No | The paper mentions "modern integer programming tools" but does not name specific software dependencies or version numbers needed for replication.
Experiment Setup | Yes | We allocate a third of the training data to the pruning procedure, and discard trees that violate rationality or envy-freeness at a significance level of 10%. The final tree minimizes the worst-case group risk (see Section 2). (A hedged sketch of this split-and-prune setup follows the table.)
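
The Pseudocode row names Algorithm 1, Recursive Decoupling, but the table does not reproduce it. As a rough illustration of the underlying decoupled-classifier idea only, and not the paper's actual algorithm, the Python sketch below trains one classifier per group alongside a pooled classifier and routes predictions by group membership. The data, model choice, and group attribute are all placeholders.

```python
# Hedged sketch of decoupled classification: one model per group plus a
# pooled model, with predictions routed by group membership. Illustration
# only; this is not the paper's Recursive Decoupling procedure.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic placeholder data: features X, binary labels y, binary group g.
n = 2000
g = rng.integers(0, 2, size=n)                       # group membership (assumed binary)
X = rng.normal(size=(n, 5)) + g[:, None] * 0.5       # group-dependent feature shift
y = (X[:, 0] + 0.8 * g * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

# Pooled classifier trained on everyone.
pooled = LogisticRegression().fit(X, y)

# Decoupled classifiers: one per group, trained on that group's data only.
decoupled = {k: LogisticRegression().fit(X[g == k], y[g == k]) for k in (0, 1)}

def predict_decoupled(X_new, g_new):
    """Route each example to the classifier assigned to its group."""
    out = np.empty(len(X_new), dtype=int)
    for k, clf in decoupled.items():
        mask = g_new == k
        if mask.any():
            out[mask] = clf.predict(X_new[mask])
    return out

# Compare per-group training accuracy of the pooled and decoupled assignments.
yhat_pooled = pooled.predict(X)
yhat_dec = predict_decoupled(X, g)
for k in (0, 1):
    mask = g == k
    print(f"group {k}: pooled acc={(yhat_pooled[mask] == y[mask]).mean():.3f}  "
          f"decoupled acc={(yhat_dec[mask] == y[mask]).mean():.3f}")
```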
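
The Dataset Splits and Experiment Setup rows quote the paper's practice of reserving a third of the training data for a pruning step and discarding candidate trees that violate rationality or envy-freeness at a 10% significance level. The table does not reproduce the paper's actual hypothesis tests; the sketch below only illustrates the general pattern, holding out a pruning set and discarding a candidate model whose group risk is significantly worse than a pooled baseline at the 10% level, using a one-sided paired z-test as a stand-in for the paper's checks.

```python
# Hedged sketch of a split-and-prune setup: hold out a third of the training
# data, then keep a candidate model only if a one-sided paired test cannot
# show (at the 10% level) that it harms some group relative to the baseline.
# The test and models here are stand-ins, not the paper's actual procedure.
import numpy as np
from scipy.stats import norm
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(1)

# Synthetic placeholder data with a binary group attribute g.
n = 3000
g = rng.integers(0, 2, size=n)
X = rng.normal(size=(n, 5)) + g[:, None] * 0.3
y = (X[:, 0] + 0.5 * g * X[:, 1] + rng.normal(scale=0.7, size=n) > 0).astype(int)

# Allocate one third of the training data to the pruning step.
X_fit, X_prune, y_fit, y_prune, g_fit, g_prune = train_test_split(
    X, y, g, test_size=1 / 3, random_state=0)

baseline = LogisticRegression().fit(X_fit, y_fit)                      # pooled model
candidate = {k: LogisticRegression().fit(X_fit[g_fit == k], y_fit[g_fit == k])
             for k in (0, 1)}                                          # decoupled models

def harms_group(loss_candidate, loss_baseline, alpha=0.10):
    """One-sided paired z-test: is the candidate's loss significantly higher?"""
    d = loss_candidate - loss_baseline
    se = d.std(ddof=1) / np.sqrt(len(d))
    if se == 0:
        return False
    p = 1 - norm.cdf(d.mean() / se)   # H1: mean difference > 0 (candidate worse)
    return p < alpha

# Discard the candidate if it significantly harms any group on the pruning set.
keep = True
for k in (0, 1):
    mask = g_prune == k
    loss_base = (baseline.predict(X_prune[mask]) != y_prune[mask]).astype(float)
    loss_cand = (candidate[k].predict(X_prune[mask]) != y_prune[mask]).astype(float)
    if harms_group(loss_cand, loss_base):
        keep = False
print("keep candidate decoupled model:", keep)
```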