Characterizing the risk of fairwashing

Authors: Ulrich Aïvodji, Hiromi Arai, Sébastien Gambs, Satoshi Hara

NeurIPS 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In Section 3, we present the results obtained from our study for a diverse set of datasets, black-box models, explanation models and fairness metrics. We start by replicating the results of our previous study [4], which demonstrate that explanations can be fairwashed. Then, we evaluate two fundamental properties of fairwashing, namely generalization and transferability.
Researcher Affiliation | Academia | Ulrich Aïvodji (École de Technologie Supérieure, ulrich.aivodji@etsmtl.ca); Hiromi Arai (RIKEN Center for Advanced Intelligence Project, hiromi.arai@riken.jp); Sébastien Gambs (Université du Québec à Montréal, gambs.sebastien@uqam.ca); Satoshi Hara (Osaka University, satohara@ar.sanken.osaka.ac.jp)
Pseudocode | No | The paper mentions 'LaundryML, an algorithm' as a basis for the study and describes optimization problems, but it does not provide pseudocode or a clearly labeled algorithm block.
Open Source Code | Yes | Our implementations are available at https://github.com/aivodji/characterizing_fairwashing
Open Datasets | Yes | We have investigated four real-world datasets commonly used in the fairness literature, namely Adult Income, Marketing, COMPAS, and Default Credit. [21] D. Dua and C. Graff. UCI Machine Learning Repository, 2017. URL: http://archive.ics.uci.edu/ml
Dataset Splits | Yes | Before the experiments, each dataset is split into three subsets, namely the training set (67%), the suing group (16.5%) and the test set (16.5%).
Hardware Specification | No | The paper does not provide hardware details such as GPU/CPU models, processors, or memory used to run the experiments. It mentions software and datasets, but no hardware specifications.
Software Dependencies | No | The paper mentions software such as Fairlearn and Hyperopt and references GitHub repositories for implementations, but it does not provide version numbers for these dependencies, which are required for reproducibility.
Experiment Setup | Yes | Given a suing group X_sg, for each black-box model b and each fairness metric m, the Pareto fronts are obtained by sweeping over 300 values of the fairness constraint ε_m ∈ [0, 1]. We performed a hyperparameter search with 25 iterations using Hyperopt [10]. We evaluated the results on a number of unfairness constraints (ε ∈ {0.03, 0.05, 0.1}) to simulate both strong and loose fairness constraints.
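
The rows above refer repeatedly to fairness metrics and to explanation models that appear fairer than the black box they explain. As a point of reference, here is a minimal sketch of one standard group-fairness metric, demographic (statistical) parity; the function name and the 0/1 encoding of the sensitive attribute are illustrative choices, not taken from the authors' code.

```python
import numpy as np

def demographic_parity_gap(y_pred, sensitive):
    """Absolute gap in positive-prediction rates between the two groups
    encoded in `sensitive` (0 or 1); a gap of 0 means demographic parity."""
    y_pred = np.asarray(y_pred)
    sensitive = np.asarray(sensitive)
    return abs(y_pred[sensitive == 0].mean() - y_pred[sensitive == 1].mean())
```

In the fairwashing setting, such a gap is computed for the interpretable explanation model and compared against the black box's gap: a high-fidelity explanation with a much smaller gap than the unfair black box it explains is exactly a fairwashed explanation.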
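
All four datasets in the 'Open Datasets' row are publicly available; Adult Income, for example, can be pulled directly from the UCI repository [21]. A minimal loading sketch follows (the column names are from the UCI documentation; this is not the authors' preprocessing pipeline):

```python
import pandas as pd

# Adult Income from the UCI Machine Learning Repository [21].
URL = ("http://archive.ics.uci.edu/ml/machine-learning-databases/"
       "adult/adult.data")
COLUMNS = [
    "age", "workclass", "fnlwgt", "education", "education-num",
    "marital-status", "occupation", "relationship", "race", "sex",
    "capital-gain", "capital-loss", "hours-per-week", "native-country",
    "income",
]
df = pd.read_csv(URL, header=None, names=COLUMNS, skipinitialspace=True)
```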
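
The 67% / 16.5% / 16.5% partition in the 'Dataset Splits' row can be reproduced with two successive random splits, sketched below on the `df` loaded above. The paper does not state the splitting mechanics or random seed, so both are assumptions here.

```python
from sklearn.model_selection import train_test_split

# 67% training set; the remaining 33% is halved into the
# suing group (16.5%) and the test set (16.5%).
train_df, rest_df = train_test_split(df, train_size=0.67, random_state=0)
suing_df, test_df = train_test_split(rest_df, test_size=0.5, random_state=0)
```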
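
Finally, the 'Experiment Setup' row describes two nested searches: a sweep over 300 fairness-constraint values ε_m ∈ [0, 1] and a 25-iteration Hyperopt search per configuration. The sketch below shows that structure only; `fit_explainer` is a hypothetical stand-in for the authors' explanation-model training routine, and the search space is illustrative.

```python
import numpy as np
from hyperopt import fmin, tpe, hp

def fit_explainer(params, eps):
    """Hypothetical placeholder: train an explanation model under the
    unfairness constraint `eps` and return a loss such as 1 - fidelity."""
    return 1.0  # replace with actual training and fidelity evaluation

def tune_explainer(eps):
    # Illustrative search space for an interpretable (tree-based) explainer.
    space = {
        "max_depth": hp.quniform("max_depth", 3, 10, 1),
        "min_samples_leaf": hp.quniform("min_samples_leaf", 1, 20, 1),
    }
    # 25-iteration hyperparameter search, matching the paper's setup.
    return fmin(fn=lambda p: fit_explainer(p, eps), space=space,
                algo=tpe.suggest, max_evals=25)

# Pareto fronts: sweep 300 values of the fairness constraint eps in [0, 1].
for eps in np.linspace(0.0, 1.0, 300):
    tune_explainer(eps)
```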