Conditional Learning of Fair Representations

Authors: Han Zhao, Amanda Coston, Tameem Adel, Geoffrey J. Gordon

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We demonstrate both in theory and on two real-world experiments that the proposed algorithm leads to a better utility-fairness trade-off on balanced datasets compared with existing algorithms on learning fair representations for classification." and, from Section 4 (Empirical Studies): "In light of our theoretic findings, in this section we verify the effectiveness of the proposed algorithm in simultaneously ensuring equalized odds and accuracy parity using real-world datasets."
Researcher Affiliation | Collaboration | Han Zhao and Amanda Coston (Machine Learning Department, Carnegie Mellon University; han.zhao@cs.cmu.edu, acoston@andrew.cmu.edu); Tameem Adel (Department of Engineering, University of Cambridge; tah47@cam.ac.uk); Geoffrey J. Gordon (Microsoft Research, Montreal, and Machine Learning Department, Carnegie Mellon University; geoff.gordon@microsoft.com)
Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing the code for the work described, nor does it provide a direct link to a source-code repository.
Open Datasets | Yes | "To this end, we perform experiments on two popular real-world datasets in the literature of algorithmic fairness, including an income-prediction dataset, known as the Adult dataset, from the UCI Machine Learning Repository (Dua & Graff, 2017), and the Propublica COMPAS dataset (Dieterich et al., 2016)."
Dataset Splits | Yes | Adult: Train/Test = 30,162/15,060, D0(Y = 1) = 0.310, D1(Y = 1) = 0.113, BR(D0, D1) = 0.196, D(Y = 1) = 0.246, D(A = 1) = 0.673. COMPAS: Train/Test = 4,320/1,852, D0(Y = 1) = 0.400, D1(Y = 1) = 0.529, BR(D0, D1) = 0.129, D(Y = 1) = 0.467, D(A = 1) = 0.514.
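The BR(D0, D1) column in the splits row is consistent with the absolute gap between the two groups' positive base rates (COMPAS: |0.400 - 0.529| = 0.129; the Adult value 0.196 vs. 0.197 likely reflects rounding in the paper's table). A minimal sketch under that reading, with an illustrative function name not taken from the paper:

```python
def base_rate_gap(p_y1_group0: float, p_y1_group1: float) -> float:
    """Absolute difference in P(Y = 1) between the two sensitive groups,
    matching the BR(D0, D1) column under the assumed definition."""
    return abs(p_y1_group0 - p_y1_group1)

# COMPAS row: D0(Y = 1) = 0.400, D1(Y = 1) = 0.529
print(round(base_rate_gap(0.400, 0.529), 3))  # 0.129
```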
Hardware Specification | No | The paper does not provide specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running its experiments.
Software Dependencies | No | The paper does not provide specific ancillary software details, such as library names with version numbers, needed to replicate the experiment.
Experiment Setup | Yes | "The hyperparameters used in the experiment are listed in Table 2." Table 2: Optimization Algorithm = AdaDelta, Learning Rate = 1.0, Batch Size = 512, Training Epochs = 100 for λ ∈ {0.1, 1.0, 10.0, 100.0, 1000.0}. "The hyperparameters used in the experiment are listed in Table 3." Table 3: Optimization Algorithm = AdaDelta, Learning Rate = 1.0, Batch Size = 512, Training Epochs = 20 for λ ∈ {0.1, 1.0} and 15 for λ = 10.0.
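The two hyperparameter tables can be expressed as a small run grid. A hypothetical reconstruction, assuming Table 2 corresponds to the Adult dataset and Table 3 to COMPAS (the row itself does not name the datasets); variable names are illustrative, not the authors' code:

```python
# Settings shared by both tables as quoted above.
SHARED = {"optimizer": "AdaDelta", "learning_rate": 1.0, "batch_size": 512}

# Table 2: 100 epochs for every value of the trade-off weight lambda.
adult_runs = [dict(SHARED, lam=lam, epochs=100)
              for lam in (0.1, 1.0, 10.0, 100.0, 1000.0)]

# Table 3: 20 epochs for lambda in {0.1, 1.0}, 15 epochs for lambda = 10.0.
compas_runs = [dict(SHARED, lam=lam, epochs=20 if lam in (0.1, 1.0) else 15)
               for lam in (0.1, 1.0, 10.0)]

print(len(adult_runs), len(compas_runs))  # 5 3
```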