FACT: A Diagnostic for Group Fairness Trade-offs

Authors: Joon Sik Kim, Jiahao Chen, Ameet Talwalkar

ICML 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "On synthetic and real datasets, we demonstrate the use cases of our diagnostic, particularly on understanding the trade-off landscape between accuracy and fairness." "In this section we show how the FACT diagnostic can practically show the relative impact of several notions of fairness on accuracy on synthetic and real datasets."
Researcher Affiliation | Collaboration | 1 Machine Learning Department, Carnegie Mellon University, Pittsburgh, USA; 2 Work partially done during an internship at JP Morgan; 3 JP Morgan AI Research, New York, USA; 4 Determined AI, San Francisco, USA.
Pseudocode | No | The paper presents mathematical formulations and optimization problems, but no structured pseudocode or algorithm blocks were found.
Open Source Code | Yes | Code is available at github.com/wnstlr/FACT.
Open Datasets | Yes | "We also study the UCI Adult dataset (Dua & Graff, 2017), a census dataset used for income classification tasks where we consider sex as the protected attribute of interest."
Dataset Splits | No | The paper describes the datasets used (synthetic and UCI Adult) but gives no training/validation/test split details, such as percentages or sample counts.
Hardware Specification | No | The paper does not report the hardware (e.g., GPU/CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not list software dependencies (library names with version numbers) needed to replicate the experiments. It points to a GitHub repository where such information may be found, but none is stated in the paper itself.
Experiment Setup | No | The paper discusses varying parameters such as the strength of the fairness conditions (λ) and the resulting trade-offs, but it does not provide specific hyperparameter values (e.g., learning rate, batch size, epochs) or detailed training configurations for the models used in the experiments.