Learning Fair Naive Bayes Classifiers by Discovering and Eliminating Discrimination Patterns
Authors: YooJung Choi, Golnoosh Farnadi, Behrouz Babaki, Guy Van den Broeck
AAAI 2020, pp. 10077-10084
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | An empirical evaluation on three real-world datasets demonstrates that we can remove exponentially many discrimination patterns by only adding a small fraction of them as constraints. |
| Researcher Affiliation | Academia | 1University of California, Los Angeles, 2Mila, 3Université de Montréal, 4Polytechnique Montréal {yjchoi, guyvdb}@cs.ucla.edu, farnadig@mila.quebec, behrouz.babaki@polymtl.ca |
| Pseudocode | Yes | Algorithm 1 DISC-PATTERNS(x, y, E) |
| Open Source Code | Yes | The processed data, code, and Appendix are available at https://github.com/UCLA-StarAI/LearnFairNB. |
| Open Datasets | Yes | We use three datasets: The Adult dataset and German dataset are used for predicting income level and credit risk, respectively, and are obtained from the UCI machine learning repository; the COMPAS dataset is used for predicting recidivism. |
| Dataset Splits | Yes | Table 4 reports the 10-fold CV accuracy of our method (δ-fair) compared to a max-likelihood naive Bayes model (unconstrained) and two other methods of learning fair classifiers |
| Hardware Specification | Yes | All experiments were run on an AMD Opteron 275 processor (2.2GHz) and 4GB of RAM running Linux CentOS 7. |
| Software Dependencies | No | To solve the signomial programs, we use GPkit, which finds local solutions to these problems using a convex optimization solver as its backend (we use Mosek, www.mosek.com, as the backend). |
| Experiment Setup | Yes | Throughout our experiments, Laplace smoothing was used to avoid learning zero probabilities. |
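
The sketches below illustrate, under stated assumptions, the main technical items referenced in the table above.

For the Pseudocode row: Algorithm 1 (DISC-PATTERNS) searches a naive Bayes model for discrimination patterns. The brute-force sketch below is not the paper's algorithm, which prunes the search with bounds; it only illustrates the quantity being checked, assuming the paper's definition that a pattern (x, y) is discriminating when |P(d | x, y) - P(d | y)| > δ, where x assigns the sensitive attributes and y a subset of the non-sensitive ones. The feature names and toy probabilities are hypothetical.

```python
import itertools

# Hedged sketch: brute-force discrimination-pattern discovery on a toy
# naive Bayes model. NOT the paper's DISC-PATTERNS algorithm; all
# numbers below are made up for illustration.

P_D = {1: 0.3, 0: 0.7}                    # prior over decision D
# P(feature=1 | D) for each binary feature.
P_F_GIVEN_D = {
    "gender": {1: 0.6, 0: 0.4},           # sensitive attribute
    "age":    {1: 0.7, 0: 0.5},           # non-sensitive attributes
    "income": {1: 0.8, 0: 0.3},
}
SENSITIVE = {"gender"}

def posterior(evidence):
    """P(D=1 | evidence) under the naive Bayes model."""
    scores = {}
    for d in (0, 1):
        p = P_D[d]
        for feat, val in evidence.items():
            p1 = P_F_GIVEN_D[feat][d]
            p *= p1 if val == 1 else (1.0 - p1)
        scores[d] = p
    return scores[1] / (scores[0] + scores[1])

def discrimination_patterns(delta):
    """Enumerate assignments (x to sensitive, y to a subset of
    non-sensitive attributes) whose discrimination degree exceeds delta."""
    nonsensitive = [f for f in P_F_GIVEN_D if f not in SENSITIVE]
    patterns = []
    for x_val in (0, 1):                           # assignment to gender
        for k in range(len(nonsensitive) + 1):     # subsets of size k
            for ys in itertools.combinations(nonsensitive, k):
                for y_vals in itertools.product((0, 1), repeat=k):
                    y = dict(zip(ys, y_vals))
                    degree = posterior({"gender": x_val, **y}) - posterior(y)
                    if abs(degree) > delta:
                        patterns.append(({"gender": x_val}, y, degree))
    return patterns

for x, y, deg in discrimination_patterns(delta=0.1):
    print(x, y, f"degree={deg:+.3f}")
```

The exhaustive loop makes the paper's motivation concrete: the number of candidate patterns grows exponentially with the number of attributes, which is why the actual algorithm relies on pruning bounds rather than enumeration.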
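For the Dataset Splits and Experiment Setup rows: the evaluation uses 10-fold cross-validation and Laplace smoothing. The snippet below shows the standard form of both on synthetic stand-in data via scikit-learn; it assumes nothing about the paper's own pipeline beyond those two named techniques. In scikit-learn, `CategoricalNB(alpha=1.0)` performs add-one Laplace smoothing.

```python
import numpy as np
from sklearn.naive_bayes import CategoricalNB
from sklearn.model_selection import cross_val_score

# Hedged sketch: Laplace-smoothed naive Bayes under 10-fold CV.
# Synthetic stand-in data; the paper uses Adult, German, and COMPAS.
rng = np.random.default_rng(0)
X = rng.integers(0, 3, size=(500, 6))   # 6 categorical features
y = rng.integers(0, 2, size=500)        # binary decision variable

# alpha=1.0 is add-one (Laplace) smoothing: it avoids the zero
# probabilities mentioned in the table for values unseen in a fold.
clf = CategoricalNB(alpha=1.0)
scores = cross_val_score(clf, X, y, cv=10)
print(f"10-fold CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```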
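For the Software Dependencies row: the paper solves signomial programs with GPkit, backed by Mosek. The toy program below shows GPkit's general signomial-programming interface, assuming gpkit and at least one GP solver are installed; the objective and constraints are placeholders, not the paper's fairness formulation.

```python
# Hedged sketch of a signomial program in GPkit. GPkit finds local
# solutions to such non-convex programs by iterating convex (GP)
# approximations; Model.localsolve is the SP entry point.
from gpkit import Variable, Model, SignomialsEnabled

x = Variable("x")
y = Variable("y")

with SignomialsEnabled():
    # Subtraction makes the first constraint signomial, which plain
    # geometric programming cannot express.
    constraints = [x >= 1 - y, y <= 0.8]

m = Model(x * y, constraints)       # minimize x*y subject to constraints
sol = m.localsolve(verbosity=0)     # local solution via sequential GPs
print(sol["variables"])
```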