Improving Fair Training under Correlation Shifts

Authors: Yuji Roh, Kangwook Lee, Steven Euijong Whang, Changho Suh

ICML 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show that our framework effectively improves existing in-processing fair algorithms w.r.t. accuracy and fairness, both on synthetic and real datasets. We perform experiments to evaluate our framework. We use logistic regression in most experiments and utilize multilayer neural networks in Sec. B.11."
Researcher Affiliation | Academia | Yuji Roh (1), Kangwook Lee (2), Steven Euijong Whang (1), Changho Suh (1); (1) Department of Electrical Engineering, KAIST; (2) Department of Electrical and Computer Engineering, University of Wisconsin-Madison.
Pseudocode | Yes | "Algorithm 1: Fair Training under Correlation Shifts"
Open Source Code | No | The paper does not provide an explicit statement or link indicating open-source code availability for the described methodology.
Open Datasets | Yes | "ProPublica COMPAS (Angwin et al., 2016) consists of 5,278 samples, and its labels indicate recidivism; Adult Census (Kohavi, 1996) has 43,131 samples, and its labels indicate a person's annual income."
Dataset Splits | No | "We split the entire data into 4:1 for the training and test datasets." The paper mentions only training and test splits, without an explicit validation split or its percentage.
Hardware Specification | Yes | "Our experiments are performed using PyTorch on a Linux server with Intel Xeon Silver 4210R CPUs and NVIDIA Quadro RTX 8000 GPUs."
Software Dependencies | No | "Our experiments are performed using PyTorch on a Linux server..." "We can solve it using convex optimization solvers (e.g., CVXPY (Diamond & Boyd, 2016))." The paper names PyTorch and CVXPY but does not give the specific versions used in its experiments.
Experiment Setup | Yes | "The batch sizes of the synthetic, COMPAS, and Adult Census datasets are 100, 200, and 2,000, respectively. We set the learning rate to 0.0005."
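
The Experiment Setup and Dataset Splits rows together pin down most of the training configuration: a 4:1 train/test split, logistic regression, per-dataset batch sizes of 100/200/2,000, and a learning rate of 0.0005. A minimal PyTorch sketch of that configuration follows; the optimizer choice, epoch count, and binary-label loss are assumptions not stated in the rows above, and the paper's fairness-specific components are omitted.

```python
# Sketch of the reported setup: logistic regression in PyTorch with a
# 4:1 train/test split, batch size 2,000 (the Adult Census value), and
# learning rate 0.0005. Optimizer and epoch count are assumptions.
import torch
from torch.utils.data import DataLoader, TensorDataset, random_split

def train_logreg(X, y, batch_size=2000, lr=5e-4, epochs=50):
    # X: float tensor of features; y: float tensor of binary labels (0./1.)
    dataset = TensorDataset(X, y)
    n_train = int(0.8 * len(dataset))                     # 4:1 split
    train_set, test_set = random_split(
        dataset, [n_train, len(dataset) - n_train])
    loader = DataLoader(train_set, batch_size=batch_size, shuffle=True)

    model = torch.nn.Linear(X.shape[1], 1)                # logistic regression
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCEWithLogitsLoss()                # sigmoid folded in

    for _ in range(epochs):
        for xb, yb in loader:
            opt.zero_grad()
            loss = loss_fn(model(xb).squeeze(1), yb)
            loss.backward()
            opt.step()
    return model, test_set
```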
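
The Software Dependencies row quotes the paper's use of CVXPY for a convex solve. As a hedged illustration only, since the paper's actual objective and constraints are not reproduced in the rows above, a generic CVXPY solve over adjusted group ratios might look like the following; the variable, objective, and constraint are placeholders.

```python
# Hypothetical CVXPY solve, in the spirit of "We can solve it using convex
# optimization solvers (e.g., CVXPY)". Placeholders, not the paper's program.
import cvxpy as cp
import numpy as np

p = np.array([0.35, 0.15, 0.30, 0.20])  # observed (label, group) ratios
w = cp.Variable(4, nonneg=True)         # adjusted ratios to solve for

prob = cp.Problem(
    cp.Minimize(cp.sum_squares(w - p)),  # stay close to the observed ratios
    [cp.sum(w) == 1])                    # remain a valid probability vector
prob.solve()
print(w.value)
```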