Robust Optimization for Fairness with Noisy Protected Groups
Authors: Serena Wang, Wenshuo Guo, Harikrishna Narasimhan, Andrew Cotter, Maya Gupta, Michael Jordan
NeurIPS 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Using two case studies, we show empirically that the robust approaches achieve better true group fairness guarantees than the naïve approach. and We compare the performance of the naïve approach and the two robust optimization approaches (DRO and soft group assignments) empirically using two datasets from UCI [18] with different constraints. |
| Researcher Affiliation | Collaboration | Serena Wang UC Berkeley Google Research serenalwang@berkeley.edu Wenshuo Guo UC Berkeley wsguo@berkeley.edu Harikrishna Narasimhan Google Research hnarasimhan@google.com Andrew Cotter Google Research acotter@google.com Maya Gupta Google Research mayagupta@google.com Michael I. Jordan UC Berkeley jordan@berkeley.edu |
| Pseudocode | Yes | Algorithm 1 Ideal Algorithm |
| Open Source Code | Yes | All experiment code is available on GitHub at https://github.com/wenshuoguo/robust-fairness-code. |
| Open Datasets | Yes | We compare the performance of the naïve approach and the two robust optimization approaches (DRO and soft group assignments) empirically using two datasets from UCI [18] with different constraints. and We use the Adult dataset from UCI [18]... We use the default of credit card clients dataset from UCI [18] |
| Dataset Splits | Yes | mean and standard error over 10 train/val/test splits and The data is split into 10 folds, and for each fold, one fold is used for validation and one for test, and the remaining 8 for training. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU/CPU models, memory, or cloud instance types. |
| Software Dependencies | No | The paper mentions that experiment code is available on GitHub but does not explicitly list specific software dependencies with version numbers (e.g., Python, PyTorch, or other libraries and their versions) required to replicate the experiment. |
| Experiment Setup | Yes | All models are trained for 500 epochs with a batch size of 128 using the Adam optimizer with a learning rate of 0.01. and The specific constraint violations measured and additional training details can be found in Appendix F.1. |
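The dataset-split scheme quoted above (10 folds; per split, one fold for validation, one for test, the remaining 8 for training) can be sketched as below. This is a minimal illustration, not the authors' code: the hyperparameter constants restate the values quoted in the table, and the choice of which fold serves as test relative to the validation fold is an assumption, since the table only states one fold each for validation and test.

```python
import numpy as np

# Hyperparameters as quoted in the Experiment Setup row (Adam optimizer assumed
# to be configured elsewhere in the training loop).
EPOCHS = 500
BATCH_SIZE = 128
LEARNING_RATE = 0.01

def fold_splits(n_examples, n_folds=10, seed=0):
    """Yield (train, val, test) index arrays for each of the n_folds splits.

    For split i, fold i is validation, fold (i + 1) % n_folds is test
    (an assumed pairing), and the remaining 8 folds form the training set.
    """
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_examples)
    folds = np.array_split(indices, n_folds)
    for i in range(n_folds):
        test_fold = (i + 1) % n_folds
        val = folds[i]
        test = folds[test_fold]
        train = np.concatenate(
            [folds[j] for j in range(n_folds) if j not in (i, test_fold)]
        )
        yield train, val, test
```

Each of the 10 splits uses disjoint validation and test folds, so reporting mean and standard error over the splits (as the paper does) averages over 10 distinct held-out evaluations.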