Conformalized Fairness via Quantile Regression
Authors: Meichen Liu, Lei Ding, Dengdeng Yu, Wulong Liu, Linglong Kong, Bei Jiang
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We demonstrate the superior empirical performance of this approach on several benchmark datasets. |
| Researcher Affiliation | Collaboration | 1 Department of Mathematical and Statistical Sciences, University of Alberta; 2 Department of Mathematics, University of Texas at Arlington; 3 Huawei Noah's Ark Lab, Canada |
| Pseudocode | Yes | We present the pseudocode of CFQP as well as the construction of ĝα for Eq. 9 in Algorithms 1 and 2, respectively. |
| Open Source Code | Yes | The code for reproducing our results is available at https://github.com/Lei-Ding07/Conformal_Quantile_Fairness. |
| Open Datasets | Yes | We report the performance of post-processing fairness adjustment on quantiles through four benchmark datasets: Law School (LAW), Community&Crime (CRIME), MEPS 2016 (MEPS), Government Salary (GOV). |
| Dataset Splits | Yes | We split the training data into proper training and calibration sets of equal sizes. |
| Hardware Specification | No | The ethics checklist explicitly states 'N/A' for hardware specifications: 'Did you include the total amount of compute and the type of resources used (e.g., type of GPUs, internal cluster, or cloud provider)? [N/A]' |
| Software Dependencies | No | No specific software dependencies with version numbers are mentioned in the provided text. |
| Experiment Setup | No | While the nominal miscoverage rate α is mentioned as 0.1, other specific hyperparameters like learning rates, batch sizes, or optimizer settings for the linear, random forest, and neural network models are not detailed in the provided text. |
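The setup described above (an equal-size split of the training data into proper training and calibration sets, with a nominal miscoverage rate of α = 0.1) follows the standard split conformal quantile regression (CQR) recipe. The following is a minimal sketch of that recipe on synthetic data; the regressor choice (`GradientBoostingRegressor` with quantile loss) and the data-generating process are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(0)
n = 2000
X = rng.uniform(-3, 3, size=(n, 1))
y = np.sin(X[:, 0]) + 0.3 * rng.standard_normal(n)

# Equal-size proper-training / calibration split, as in the paper.
half = n // 2
X_train, y_train = X[:half], y[:half]
X_cal, y_cal = X[half:], y[half:]

alpha = 0.1  # nominal miscoverage rate used in the experiments

# Fit lower and upper quantile regressors on the proper training set.
lo = GradientBoostingRegressor(loss="quantile", alpha=alpha / 2).fit(X_train, y_train)
hi = GradientBoostingRegressor(loss="quantile", alpha=1 - alpha / 2).fit(X_train, y_train)

# Conformity scores: how far each calibration point falls outside the band.
scores = np.maximum(lo.predict(X_cal) - y_cal, y_cal - hi.predict(X_cal))

# Finite-sample-corrected empirical quantile of the calibration scores.
q = np.quantile(scores, np.ceil((1 - alpha) * (half + 1)) / half, method="higher")

# Conformalized prediction interval for a new point: [lo - q, hi + q].
X_new = np.array([[0.0]])
interval = (lo.predict(X_new)[0] - q, hi.predict(X_new)[0] + q)
```

Under exchangeability, the interval `[lo - q, hi + q]` covers the true response with probability at least 1 − α; the paper's post-processing fairness adjustment operates on top of quantile estimates of this kind.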