Fairness without Demographics through Adversarially Reweighted Learning

Authors: Preethi Lahoti, Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed H. Chi

NeurIPS 2020

Reproducibility assessment. For each Reproducibility Variable: the Result, followed by the supporting LLM Response.
Research Type: Experimental
  LLM Response: Our results show that ARL improves Rawlsian Max-Min fairness, with notable AUC improvements for worst-case protected groups in multiple datasets, outperforming state-of-the-art alternatives. ... In Section 4, we evaluate ARL on three real-world datasets. Our results show that ARL yields significant AUC improvements for worst-case protected groups, outperforming state-of-the-art alternatives on all the datasets, and even improves the overall AUC on two of three datasets.
Researcher Affiliation: Collaboration
  LLM Response: Preethi Lahoti (plahoti@mpi-inf.mpg.de), Max Planck Institute for Informatics; Alex Beutel, Jilin Chen, Kang Lee, Flavien Prost, Nithum Thain, Xuezhi Wang, Ed H. Chi, Google Research.
Pseudocode: No
  LLM Response: The paper includes a computational graph (Figure 2) but provides no explicit pseudocode or algorithm blocks.
  (See the hedged training-step sketch after this table.)
Open Source Code: No
  LLM Response: The paper contains no statements or links indicating that open-source code for the methodology is provided.
Open Datasets: Yes
  LLM Response: We now demonstrate the effectiveness of our proposed ARL approach through experiments over three real datasets well used in the fairness literature: (i) Adult [45]: income prediction, (ii) LSAC [52]: law school admission, and (iii) COMPAS [1]: recidivism prediction. ... [45] B. Becker and R. Kohavi. 1996. UCI ML Repository. http://archive.ics.uci.edu/ml
  (See the Adult-loading sketch after this table.)
Dataset Splits: Yes
  LLM Response: Best hyper-parameter values for all approaches are chosen via grid-search by performing 5-fold cross validation optimizing for best overall AUC.
  (See the cross-validation sketch after this table.)
Hardware Specification: No
  LLM Response: The paper does not specify any particular hardware used for running the experiments (e.g., GPU/CPU models, memory specifications, or cloud resources).
Software Dependencies: No
  LLM Response: The paper describes architectural details such as 'feed-forward network' and 'ReLU activation function', but does not list any specific software libraries with version numbers (e.g., Python, TensorFlow, PyTorch).
Experiment Setup: Yes
  LLM Response: Our model for the learner is a fully connected two layer feed-forward network with 64 and 32 hidden units in the hidden layers, with ReLU activation function. While our adversary is general enough to be a deep network, we observed that for the small academic datasets used in our experiments, a linear adversary performed the best. ... Best hyper-parameter values for all approaches are chosen via grid-search by performing 5-fold cross validation optimizing for best overall AUC.
  (This learner/adversary configuration is used in the first sketch after this table.)
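
Since the paper provides no pseudocode, the following is a minimal PyTorch sketch of the ARL setup as described in the quotes above: a two-layer (64/32, ReLU) learner and a linear adversary that reweights examples, trained with alternating min/max updates. The class names, the weight normalization (our reconstruction of the paper's lambda_i = 1 + n * f_i / sum_j f_j form), and the update schedule are our assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn

class Learner(nn.Module):
    """Two-layer feed-forward learner (64 and 32 hidden units, ReLU),
    matching the architecture quoted in the Experiment Setup row."""
    def __init__(self, d_in: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(d_in, 64), nn.ReLU(),
            nn.Linear(64, 32), nn.ReLU(),
            nn.Linear(32, 1),
        )

    def forward(self, x):
        return self.net(x).squeeze(-1)  # binary-classification logits


class LinearAdversary(nn.Module):
    """Linear adversary over (x, y); the paper reports that a linear
    adversary performed best on its small academic datasets."""
    def __init__(self, d_in: int):
        super().__init__()
        self.lin = nn.Linear(d_in + 1, 1)

    def forward(self, x, y):
        score = torch.sigmoid(self.lin(torch.cat([x, y.unsqueeze(-1)], -1)))
        score = score.squeeze(-1)
        # Keep every weight >= 1 and scale the normalized scores so they
        # sum to n across the batch (assumed lambda reconstruction).
        return 1.0 + len(x) * score / score.sum()


def arl_step(learner, adversary, opt_l, opt_a, x, y):
    """One alternating ARL update: the learner minimizes the
    adversarially weighted loss; the adversary maximizes it."""
    bce = nn.BCEWithLogitsLoss(reduction="none")

    weights = adversary(x, y).detach()          # freeze adversary
    learner_loss = (weights * bce(learner(x), y)).mean()
    opt_l.zero_grad(); learner_loss.backward(); opt_l.step()

    per_example = bce(learner(x), y).detach()   # freeze learner
    adv_loss = -(adversary(x, y) * per_example).mean()
    opt_a.zero_grad(); adv_loss.backward(); opt_a.step()
    return learner_loss.item()
```

The optimizers and alternating schedule here are placeholders; the paper selects such hyper-parameters by the grid search described in the Dataset Splits row.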
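
The UCI repository cited as [45] hosts the Adult dataset used for income prediction. A minimal loading sketch, assuming the standard UCI column layout and file URL (neither is spelled out in the paper), might look like:

```python
import pandas as pd

# Standard UCI Adult column names (assumed; not listed in the paper).
COLUMNS = [
    "age", "workclass", "fnlwgt", "education", "education-num",
    "marital-status", "occupation", "relationship", "race", "sex",
    "capital-gain", "capital-loss", "hours-per-week", "native-country",
    "income",
]

URL = ("https://archive.ics.uci.edu/ml/machine-learning-databases/"
       "adult/adult.data")

adult = pd.read_csv(URL, names=COLUMNS, skipinitialspace=True)
# Binary target: predict whether income exceeds 50K.
adult["label"] = (adult["income"] == ">50K").astype(int)
print(adult.shape)
```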
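
The Dataset Splits row states that hyper-parameters were chosen by grid search with 5-fold cross validation, optimizing overall AUC. A scikit-learn sketch of that selection protocol (the estimator and parameter grid here are illustrative placeholders, not the paper's):

```python
from sklearn.model_selection import GridSearchCV
from sklearn.neural_network import MLPClassifier

# Placeholder model and grid; the paper tunes its own models.
search = GridSearchCV(
    estimator=MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=500),
    param_grid={
        "alpha": [1e-4, 1e-3, 1e-2],          # L2 regularization strength
        "learning_rate_init": [1e-3, 1e-2],
    },
    scoring="roc_auc",   # "optimizing for best overall AUC"
    cv=5,                # 5-fold cross validation
)
# search.fit(X_train, y_train)  # X_train, y_train: preprocessed features/labels
# print(search.best_params_)
```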