One-Network Adversarial Fairness

Authors: Tameem Adel, Isabel Valera, Zoubin Ghahramani, Adrian Weller (pp. 2412-2420)

AAAI 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We experiment on two fairness datasets comparing against many earlier approaches to demonstrate state-of-the-art effectiveness of our methods (Section 4). We test our framework on two popular real-world datasets in the fairness literature, the ProPublica COMPAS dataset (Larson et al. 2016) and the Adult dataset (Dheeru and Taniskidou 2017).
Researcher Affiliation | Collaboration | Tameem Adel (University of Cambridge, UK; tah47@cam.ac.uk); Isabel Valera (MPI-IS, Germany; isabel.valera@tuebingen.mpg.de); Zoubin Ghahramani (University of Cambridge, UK and Uber AI Labs, USA; zoubin@eng.cam.ac.uk); Adrian Weller (University of Cambridge, UK and The Alan Turing Institute, UK; aw665@cam.ac.uk)
Pseudocode | No | The paper describes the algorithm conceptually and with architectural diagrams (Figure 1, Figure 2) but does not provide structured pseudocode or an algorithm block.
Open Source Code | No | The paper does not provide any links to, or explicit statements about, open-source code for the described methodology.
Open Datasets | Yes | We test our framework on two popular real-world datasets in the fairness literature, the ProPublica COMPAS dataset (Larson et al. 2016) and the Adult dataset (Dheeru and Taniskidou 2017).
Dataset Splits | Yes | Each experiment is repeated ten times; in each run, data is randomly split into three partitions: training, validation (used to select the value of the fairness hyperparameter β) and test. A portion of 60% of the data is reserved for training, 20% for validation and 20% for testing.
Hardware Specification | No | The paper describes the neural network architecture but does not specify any hardware details (e.g., CPU, GPU models, or memory) used for running the experiments.
Software Dependencies | No | Adam (Kingma and Ba 2015) is the optimizer used to compute the gradients. For FAD-MD, the one-class SVM introduced in (Schölkopf et al. 2000) is used with ν = 0.5 (the fraction of support vectors and outliers). The paper names software components but gives no version numbers.
Experiment Setup | Yes | Values of the fairness hyperparameter β selected by cross-validation are 0.3 and 0.8 for the COMPAS and Adult datasets, respectively. Adam (Kingma and Ba 2015) is the optimizer used to compute the gradients. Details of the model architectures are listed in Table 3 (two layers in addition to the adversarial layer g; FC stands for fully connected):
COMPAS: FC 16 ReLU, FC 32 ReLU, g: FC 16 ReLU, FC output.
Adult: FC 32 ReLU, FC 32 ReLU, g: FC 16 ReLU, FC output.
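The reported split protocol (ten repetitions, each with a random 60% / 20% / 20% partition into train, validation, and test) can be sketched as follows; the sample count and seeding scheme are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def split_60_20_20(n_samples, seed):
    """Randomly partition sample indices into 60% train / 20% validation / 20% test."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(n_samples)
    n_train = int(0.6 * n_samples)
    n_val = int(0.2 * n_samples)
    return idx[:n_train], idx[n_train:n_train + n_val], idx[n_train + n_val:]

# Ten repetitions, as in the paper's protocol; one fresh random split per run.
splits = [split_60_20_20(1000, seed=run) for run in range(10)]
```

The validation partition from each run would then be used to pick the fairness hyperparameter β before evaluating on the held-out test partition.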
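The one-class SVM setting mentioned for FAD-MD (ν = 0.5) can be reproduced with scikit-learn's `OneClassSVM`; the paper does not name a specific library, so this choice, along with the RBF kernel and the synthetic stand-in features, is an assumption:

```python
import numpy as np
from sklearn.svm import OneClassSVM

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))  # stand-in features; the paper applies it to learned representations

# nu = 0.5 upper-bounds the fraction of training outliers and
# lower-bounds the fraction of support vectors, matching the paper's setting.
ocsvm = OneClassSVM(nu=0.5, kernel="rbf", gamma="scale").fit(X)
labels = ocsvm.predict(X)  # +1 for inliers, -1 for outliers
```

With ν = 0.5, roughly half of the training points end up flagged as outliers, which is the behavior the ν parameter controls.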
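A minimal NumPy sketch of the COMPAS architecture from Table 3 (FC 16 ReLU, FC 32 ReLU, adversarial layer g: FC 16 ReLU, FC output); the input and output dimensions are illustrative assumptions, since the paper's table only specifies hidden-layer widths:

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def fc(n_in, n_out, rng):
    """One fully connected layer: a weight matrix and a bias vector."""
    return rng.normal(scale=0.1, size=(n_in, n_out)), np.zeros(n_out)

rng = np.random.default_rng(0)
n_features, n_classes = 11, 2  # hypothetical input/output sizes, not from the paper

W1, b1 = fc(n_features, 16, rng)  # FC 16 ReLU
W2, b2 = fc(16, 32, rng)          # FC 32 ReLU
Wg, bg = fc(32, 16, rng)          # adversarial layer g: FC 16 ReLU
Wo, bo = fc(16, n_classes, rng)   # FC output

def forward(x):
    h = relu(x @ W1 + b1)
    h = relu(h @ W2 + b2)
    h = relu(h @ Wg + bg)
    return h @ Wo + bo  # logits

logits = forward(rng.normal(size=(5, n_features)))
```

The Adult variant differs only in the first layer width (FC 32 instead of FC 16); in the paper these layers are trained with Adam and the adversarial objective rather than the plain forward pass shown here.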