Fair Mixup: Fairness via Interpolation
Authors: Ching-Yao Chuang, Youssef Mroueh
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We analyze fair mixup and empirically show that it ensures a better generalization for both accuracy and fairness measurement in tabular, vision, and language benchmarks. |
| Researcher Affiliation | Collaboration | Ching-Yao Chuang CSAIL, MIT cychuang@mit.edu Youssef Mroueh IBM Research AI mroueh@us.ibm.com |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | Yes | The code is available at https://github.com/chingyaoc/fair-mixup. |
| Open Datasets | Yes | UCI Adult dataset (Dua & Graff, 2017) ... CelebA face attributes dataset (Liu et al.)... Jigsaw toxic comment dataset (Jigsaw, 2018). |
| Dataset Splits | Yes | the dataset is randomly split into a training, validation, and testing set with partition 60%, 20%, and 20%, respectively. |
| Hardware Specification | No | The paper does not specify the hardware (e.g., GPU/CPU models) used for running the experiments. |
| Software Dependencies | No | The paper mentions the Adam optimizer and BERT embeddings but does not provide specific version numbers for software dependencies like PyTorch, TensorFlow, or specific library versions. |
| Experiment Setup | Yes | The models are two-layer ReLU networks with hidden size 200. We only evaluate input mixup for Adult dataset as the network is not deep enough to produce meaningful latent representations. The models are optimized with Adam optimizer (Kingma & Ba, 2014) with learning rate 1e-3. We retrain each model 10 times and report the mean accuracy and fairness measurement. |
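To make the setup row concrete, below is a minimal NumPy sketch of the input-mixup fairness penalty on a two-layer ReLU network with hidden size 200, matching the architecture described above. This is not the authors' implementation (their code is at the linked repository); the network weights, batch sizes, and the finite-difference approximation of the interpolation-path penalty are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative two-layer ReLU network, hidden size 200 as in the paper's setup.
D, H = 8, 200
W1 = rng.normal(scale=0.1, size=(D, H))
W2 = rng.normal(scale=0.1, size=(H, 1))

def mean_score(x):
    """Mean sigmoid prediction over a batch."""
    h = np.maximum(x @ W1, 0.0)  # ReLU hidden layer
    return float((1.0 / (1.0 + np.exp(-(h @ W2)))).mean())

# Synthetic stand-ins for batches from the two demographic groups.
x0 = rng.normal(size=(32, D))
x1 = rng.normal(size=(32, D)) + 0.5

def fair_mixup_penalty(x0, x1, n_steps=10):
    """Finite-difference approximation of the smoothness penalty:
    total variation of the mean prediction along the input-mixup
    path t*x1 + (1-t)*x0 for t in [0, 1]."""
    ts = np.linspace(0.0, 1.0, n_steps + 1)
    scores = [mean_score(t * x1 + (1.0 - t) * x0) for t in ts]
    return float(np.abs(np.diff(scores)).sum())

# Demographic parity gap between the two groups' mean predictions.
dp_gap = abs(mean_score(x1) - mean_score(x0))
penalty = fair_mixup_penalty(x0, x1)
# By the triangle inequality, the path penalty upper-bounds the gap.
```

In training, this penalty would be added to the classification loss and minimized with Adam (learning rate 1e-3, per the setup row); penalizing prediction changes along the interpolation path drives the group-wise mean predictions together.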