Fairness with Overlapping Groups; a Probabilistic Perspective
Authors: Forest Yang, Moustapha Cisse, Sanmi Koyejo
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On a variety of real datasets, the proposed approach outperforms baselines in terms of its fairness-performance tradeoff. ... Evaluation. Empirical results are provided to highlight our theoretical claims. |
| Researcher Affiliation | Collaboration | Forest Yang (UC Berkeley); Moustapha Cisse (Google Research, Accra); Sanmi Koyejo (Google Research, Accra & Illinois) |
| Pseudocode | Yes | Algorithm 1: GroupFair (group-fair classification with overlapping groups) |
| Open Source Code | Yes | Code is available at https://github.com/frstyang/fairness-with-overlapping-groups |
| Open Datasets | Yes | We use the following datasets (details in the appendix): (i) Communities and Crime, (ii) Adult census, (iii) German credit and (iv) Law school. ... [8] Dheeru Dua and Casey Graff. UCI machine learning repository, 2017. URL http://archive.ics.uci.edu/ml. |
| Dataset Splits | No | The paper mentions evaluating on a 'train set and a test set' but does not explicitly specify a validation set or provide details on the split percentages or sizes for training, validation, or test data. |
| Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., GPU, CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions 'Adam to minimize logistic loss' and 'logistic loss' for the baseline, implying the use of machine learning libraries, but it does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow, scikit-learn versions). |
| Experiment Setup | No | The paper describes the fairness violation and error metric used ('demographic parity' and '0-1 error') and the baseline approach ('linear classifier implemented by using Adam to minimize logistic loss plus the following regularization function'). However, it does not provide concrete hyperparameter values such as learning rates, batch sizes, number of epochs, or specific optimizer settings for the proposed methods (Plugin, Weighted-ERM). |
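Since the paper only describes its baseline at a high level (a linear classifier trained with Adam on logistic loss plus a demographic-parity regularization term), a minimal sketch can make the setup concrete. The sketch below is illustrative, not the authors' code: the hyperparameters (`lam`, `lr=0.01`, `steps=500`) and the specific squared-gap form of the regularizer are assumptions, and the demographic-parity metric is the standard "max gap between a group's positive rate and the overall positive rate".

```python
# Illustrative sketch (NOT the paper's implementation): a linear classifier
# trained with Adam on logistic loss plus a demographic-parity regularizer.
# All hyperparameter values and the squared-gap regularizer form are assumed.
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def dp_violation(scores, groups):
    """Demographic-parity violation: largest gap between any group's
    mean score and the overall mean score."""
    overall = scores.mean()
    return max(abs(scores[groups == g].mean() - overall)
               for g in np.unique(groups))

def train_baseline(X, y, groups, lam=1.0, lr=0.01, steps=500):
    """Minimize  logistic loss + lam * sum_g (group mean - overall mean)^2
    with Adam (standard bias-corrected update)."""
    n, d = X.shape
    w = np.zeros(d)
    m, v = np.zeros(d), np.zeros(d)
    b1, b2, eps = 0.9, 0.999, 1e-8
    gids = np.unique(groups)
    for t in range(1, steps + 1):
        p = sigmoid(X @ w)
        grad = X.T @ (p - y) / n            # gradient of mean logistic loss
        ds = p * (1 - p)                    # d sigmoid / d logit
        overall = p.mean()
        for g in gids:
            mask = groups == g
            gap = p[mask].mean() - overall
            # d gap / d w, via the chain rule through the sigmoid
            dgap = (mask / mask.sum() - 1.0 / n) * ds
            grad += lam * 2 * gap * (X.T @ dgap)
        m = b1 * m + (1 - b1) * grad
        v = b2 * v + (1 - b2) * grad ** 2
        w -= lr * (m / (1 - b1 ** t)) / (np.sqrt(v / (1 - b2 ** t)) + eps)
    return w
```

On synthetic data where a feature correlates with group membership, increasing `lam` trades accuracy for a smaller demographic-parity violation, which is the fairness-performance tradeoff the paper evaluates.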