On the Fairness ROAD: Robust Optimization for Adversarial Debiasing

Authors: Vincent Grari, Thibault Laugel, Tatsunori Hashimoto, Sylvain Lamprier, Marcin Detyniecki

ICLR 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Numerical experiments demonstrate the effectiveness of our method: it achieves, for a given global fairness level, Pareto dominance with respect to local fairness and accuracy across three standard datasets, as well as enhances fairness generalization under distribution shift. ... Experiments, described in Section 4, show the efficacy of the approach on various datasets.
Researcher Affiliation | Collaboration | 1 AXA Group Operations; 2 Stanford University; 3 LERIA, Université d'Angers, France; 4 TRAIL, Sorbonne Université, Paris, France; 5 Polish Academy of Sciences, IBS PAN, Warsaw, Poland
Pseudocode | Yes | More details are provided in the appendix (see Alg. 1). ... Algorithm 1 ROAD: Robust Optimization for Adversarial Debiasing ... Algorithm 2 BROAD: Boltzmann Robust Optimization for Adversarial Debiasing. (A simplified sketch of the underlying adversarial training loop is given after the table.)
Open Source Code | Yes | Code: https://github.com/axa-rev-research/ROAD-fairness/
Open Datasets | Yes | For this purpose, we use 3 datasets often used in fair classification, described in Appendix A.8.1: Compas (Angwin et al., 2016), Law (Wightman, 1998) and German Credit (Hofmann, 1994). ... after training classifiers on the training set of the classical Adult dataset (1994), we evaluate the tradeoff between accuracy and global fairness ... on the 2014 and 2015 Folktables datasets (Ding et al., 2021)
Dataset Splits | No | The paper states that 'Each dataset is split into training and test subsets' but does not specify details for a distinct validation set or its split percentages/counts.
Hardware Specification | No | The paper does not provide specific details on the hardware (e.g., CPU, GPU models, memory) used for running the experiments.
Software Dependencies | No | The paper does not specify software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions or library versions).
Experiment Setup | Yes | In order to obtain the results shown in Figures 2 and 3, we explore the following hyperparameter values for ROAD and BROAD: λ_g: grid of 20 values between 0 and 5; τ: grid of 10 values between 0 and 1. The networks f_{w_f}, g_{w_g} and r_{w_r} have the following architectures: f_{w_f}: FC:64 ReLU, FC:32 ReLU, FC:1 Sigmoid; g_{w_g}: FC:64 ReLU, FC:32 ReLU, FC:16 ReLU, FC:1 Sigmoid; r_{w_r}: FC:64 ReLU, FC:32 ReLU, FC:1. (A hedged PyTorch sketch of these architectures and the hyperparameter grid follows the table.)
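
To make the experiment-setup row concrete, here is a minimal PyTorch sketch of networks with the reported layer widths, together with the reported hyperparameter grid. It is an illustration only: the activation abbreviations are assumed to mean ReLU and Sigmoid, the input dimensions and helper names (make_mlp, n_features) are hypothetical, and the authors' actual implementation lives in the linked repository.

```python
# Hedged sketch of the reported architectures (not taken from the authors' repository).
# Assumptions: hidden activations are ReLU, the final "Sig"/"Sigm" is Sigmoid, and the
# input dimension depends on the dataset (n_features below is a placeholder).
import itertools
import torch.nn as nn

def make_mlp(in_dim, hidden, out_activation=None):
    """Fully connected network: hidden widths with ReLU, then a single output unit."""
    layers, prev = [], in_dim
    for width in hidden:
        layers += [nn.Linear(prev, width), nn.ReLU()]
        prev = width
    layers.append(nn.Linear(prev, 1))
    if out_activation is not None:
        layers.append(out_activation)
    return nn.Sequential(*layers)

n_features = 12  # hypothetical; set to the dataset's feature dimension

f = make_mlp(n_features, [64, 32], nn.Sigmoid())   # classifier f_{w_f}: FC 64, FC 32, FC 1 + Sigmoid
g = make_mlp(1, [64, 32, 16], nn.Sigmoid())        # adversary g_{w_g}: FC 64, FC 32, FC 16, FC 1 + Sigmoid
                                                   # (input size 1 assumes g reads the classifier's output)
r = make_mlp(n_features, [64, 32])                 # weighting network r_{w_r}: FC 64, FC 32, FC 1, linear output

# Hyperparameter grid reported in the table: 20 values of lambda_g in [0, 5], 10 values of tau in [0, 1].
lambda_g_grid = [i * 5.0 / 19 for i in range(20)]
tau_grid = [j / 9 for j in range(10)]
search_space = list(itertools.product(lambda_g_grid, tau_grid))  # 200 (lambda_g, tau) configurations
```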
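
For readers unfamiliar with the family of methods behind Algorithms 1 and 2, the sketch below shows the generic adversarial-debiasing min-max loop that ROAD builds on: a classifier is trained against an adversary that tries to recover the sensitive attribute from the classifier's predictions. This is a simplified illustration under stated assumptions, not the authors' Algorithm 1; in particular it omits ROAD's robust reweighting network r_{w_r} and the temperature τ, and all function and argument names are hypothetical.

```python
# Simplified adversarial debiasing step (illustrative only; NOT the paper's Algorithm 1).
# f: classifier, g: adversary predicting the sensitive attribute s from f's output.
import torch.nn.functional as F

def adversarial_debiasing_step(f, g, opt_f, opt_g, x, y, s, lambda_g):
    # 1) Adversary update: learn to predict s from the (detached) classifier output.
    opt_g.zero_grad()
    adv_loss = F.binary_cross_entropy(g(f(x).detach()), s)
    adv_loss.backward()
    opt_g.step()

    # 2) Classifier update: fit the labels while degrading the adversary's performance,
    #    with lambda_g trading accuracy against (global) fairness.
    opt_f.zero_grad()
    y_hat = f(x)
    clf_loss = F.binary_cross_entropy(y_hat, y)
    fairness_penalty = -F.binary_cross_entropy(g(y_hat), s)
    (clf_loss + lambda_g * fairness_penalty).backward()
    opt_f.step()
    return clf_loss.item(), adv_loss.item()
```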