Achieving Equalized Odds by Resampling Sensitive Attributes

Authors: Yaniv Romano, Stephen Bates, Emmanuel J. Candès

NeurIPS 2020

Reproducibility Variable / Result / LLM Response

Research Type: Experimental
    "We demonstrate the applicability and validity of the proposed framework both in regression and multi-class classification problems, reporting improved performance over state-of-the-art methods."

Researcher Affiliation: Academia
    "Yaniv Romano, Department of Statistics, Stanford University, Stanford, CA, USA, yromano@stanford.edu; Stephen Bates, Department of Statistics, Stanford University, Stanford, CA, USA, stephenbates@stanford.edu; Emmanuel J. Candès, Departments of Mathematics and of Statistics, Stanford University, Stanford, CA, USA, candes@stanford.edu"

Pseudocode: Yes
    "Algorithm 1: Fair Dummies Model Fitting"

Open Source Code: Yes
    "The software is available online at https://github.com/yromano/fair_dummies."

Open Datasets: Yes
    "We begin with experiments on two data sets with real-valued responses: the 2016 Medical Expenditure Panel Survey (MEPS), where we seek to predict medical usage based on demographic variables, and the widely used UCI Communities and Crime data set, where we seek to predict violent crime levels from census and police data. See Supplementary Section S5.1 for more details."

Dataset Splits: Yes
    "In all experiments, we randomly split the data into a training set (60%), a hold-out set (20%) to fit the test statistic for the fair-dummies test, and a test set (20%) to evaluate their performance."

Hardware Specification: No
    The paper does not specify any particular hardware details (e.g., CPU or GPU models, or cloud computing resources) used to run the experiments.

Software Dependencies: No
    The paper does not provide ancillary software details with version numbers (e.g., library names such as PyTorch or TensorFlow with their respective versions) needed to replicate the experiments.

Experiment Setup: No
    The paper states: "Therefore, we choose to tune the set of parameters of each method only once and treat the chosen set as fixed in future experiments; see Supplementary Section S6.1 for a full description of the tuning of each method." Specific setup details, such as hyperparameters, are thus deferred to the supplementary material rather than given in the main text.
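The 60%/20%/20% split described under Dataset Splits can be sketched as follows. This is a minimal illustration of the stated protocol, not the authors' code: the function name, random seed, and rounding behavior are assumptions, and only the split proportions come from the paper.

```python
import numpy as np

def split_indices(n, seed=0):
    """Randomly partition n sample indices into a training set (60%),
    a hold-out set (20%), and a test set (20%), as in the paper's
    experimental protocol. Seed and rounding are illustrative choices."""
    rng = np.random.default_rng(seed)
    perm = rng.permutation(n)          # random ordering of all indices
    n_train = int(0.6 * n)
    n_hold = int(0.2 * n)
    train = perm[:n_train]
    hold = perm[n_train:n_train + n_hold]
    test = perm[n_train + n_hold:]     # remainder goes to the test set
    return train, hold, test

train, hold, test = split_indices(1000)
print(len(train), len(hold), len(test))  # 600 200 200
```

In the paper's usage, the hold-out portion fits the test statistic for the fair-dummies test, while the final portion is reserved for performance evaluation.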