Certifying Some Distributional Fairness with Subpopulation Decomposition

Authors: Mintong Kang, Linyi Li, Maurice Weber, Yang Liu, Ce Zhang, Bo Li

NeurIPS 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We evaluate our certified fairness on six real-world datasets and show that our certification is tight in the sensitive shifting scenario and provides non-trivial certification under general shifting. Our framework is flexible to integrate additional non-skewness constraints and we show that it provides even tighter certification under different real-world scenarios. We also compare our certified fairness bound with adapted existing distributional robustness bounds on Gaussian data and demonstrate that our method is significantly tighter.
Researcher Affiliation | Academia | Mintong Kang (UIUC) mintong2@illinois.edu; Linyi Li (UIUC) linyi2@illinois.edu; Maurice Weber (ETH Zurich) maurice.weber@inf.ethz.ch; Yang Liu (UC Santa Cruz) yangliu@ucsc.edu; Ce Zhang (ETH Zurich) ce.zhang@inf.ethz.ch; Bo Li (UIUC) lbo@illinois.edu
Pseudocode | No | The paper does not contain any clearly labeled pseudocode or algorithm blocks.
Open Source Code | Yes | Code, model, and all experimental data are publicly available at https://github.com/AI-secure/Certified-Fairness.
Open Datasets | Yes | We validate our certified fairness on six real-world datasets: Adult [3], Compas [2], Health [19], Lawschool [48], Crime [3], and German [3]. Details on the datasets and data processing steps are provided in Appendix E.1.
Dataset Splits | No | The paper mentions that 'Details on the datasets and data processing steps are provided in Appendix E.1.' and that training details are in Appendix E.3, but no specific dataset split percentages or sample counts for training, validation, or test sets are provided in the main text.
Hardware Specification | No | The paper does not provide specific details on the hardware used (e.g., GPU/CPU models, memory) to run the experiments.
Software Dependencies | No | The paper mentions 'Pytorch' in the references but does not specify version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | Following the standard setup of fairness evaluation in the literature [39, 38, 31, 42], we consider the scenario that the sensitive attributes and labels take binary values. The ReLU network composed of 2 hidden layers of size 20 is used for all datasets.
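
The experiment-setup row above fixes the classifier architecture: a fully connected ReLU network with 2 hidden layers of size 20. Below is a minimal PyTorch sketch of such a network, not taken from the authors' repository; the class name, the input dimension of 14, and the two-class output are illustrative assumptions.

import torch
import torch.nn as nn

class TwoLayerReLUNet(nn.Module):
    """Sketch of the described architecture: two hidden layers of size 20 with ReLU."""

    def __init__(self, input_dim: int, hidden_dim: int = 20, num_classes: int = 2):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),  # binary labels -> 2 logits
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

# Example usage with a batch of 4 feature vectors of hypothetical dimension 14:
model = TwoLayerReLUNet(input_dim=14)
logits = model(torch.randn(4, 14))  # shape: (4, 2)

Since the paper reports neither hardware nor software versions, any such re-implementation detail beyond the layer sizes and activation is an assumption.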