FR-Train: A Mutual Information-Based Approach to Fair and Robust Training
Authors: Yuji Roh, Kangwook Lee, Steven Whang, Changho Suh
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide experimental results for FR-Train. For the fairness measure, we use disparate impact, while leaving in the supplementary the results for equalized odds and equal opportunity. |
| Researcher Affiliation | Academia | School of Electrical Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Korea; Department of Electrical and Computer Engineering, University of Wisconsin-Madison, Madison, Wisconsin, USA. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | https://github.com/yuji-roh/fr-train |
| Open Datasets | Yes | We use two real datasets: ProPublica COMPAS (Angwin et al., 2016) and Adult Census (Kohavi, 1996), which have 7,214 and 45,222 examples, respectively. |
| Dataset Splits | Yes | To make a validation set, we randomly select clean examples that amount to 10% of the entire training data. For FR-Train and RML, the validation set is 10% of Dtr. We consider a scenario where one first constructs a small validation set (which amounts to 5% of Dtr) based on crowdsourcing. |
| Hardware Specification | Yes | We use PyTorch (Paszke et al., 2017), and all experiments are performed on a server with Intel i7-6850 CPUs. |
| Software Dependencies | No | The paper mentions 'PyTorch (Paszke et al., 2017)' but does not provide a specific version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | Here λ1 and λ2 are tuning knobs that play roles to emphasize fair and robust training, respectively. We compute the final example weights as W = R + D(X, Z, Ŷ)(1 − R), where R = σ(L_c/L_d − C) is a conversion of the loss ratio into a probability using the sigmoid function σ and hyperparameter C. |
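The reweighting formula quoted above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: it assumes `loss_clean` and `loss_disc` stand for the classifier loss L_c and discriminator loss L_d, and that `disc_probs` are the per-example discriminator outputs D(X, Z, Ŷ); all names and the example values are hypothetical.

```python
import math

def fr_train_weights(loss_clean, loss_disc, disc_probs, C=0.0):
    """Sketch of the quoted rule W = R + D(X, Z, Y_hat) * (1 - R),
    where R = sigmoid(L_c / L_d - C) converts the loss ratio into a
    probability. Names and values here are illustrative assumptions."""
    # R is a scalar in (0, 1) derived from the loss ratio.
    R = 1.0 / (1.0 + math.exp(-(loss_clean / loss_disc - C)))
    # Each final weight interpolates between the discriminator output and 1.
    return [R + d * (1.0 - R) for d in disc_probs]

# Hypothetical losses and discriminator outputs for three examples.
weights = fr_train_weights(loss_clean=0.5, loss_disc=1.0,
                           disc_probs=[0.9, 0.2, 0.6])
```

Since W = R + D(1 − R) is a convex combination of D and 1, each weight stays in [D, 1]; examples the discriminator trusts more (larger D) receive larger weights.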