Post-processing for Individual Fairness
Authors: Felix Petersen, Debarghya Mukherjee, Yuekai Sun, Mikhail Yurochkin
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirically, our post-processing algorithms correct individual biases in large-scale NLP models such as BERT, while preserving accuracy. |
| Researcher Affiliation | Collaboration | Felix Petersen (University of Konstanz, felix.petersen@uni.kn); Debarghya Mukherjee (University of Michigan, mdeb@umich.edu); Yuekai Sun (University of Michigan, yuekai@umich.edu); Mikhail Yurochkin (IBM Research, MIT-IBM Watson AI Lab, mikhail.yurochkin@ibm.com) |
| Pseudocode | No | The paper describes algorithmic steps and solutions in text and equations but does not include a structured pseudocode or algorithm block. |
| Open Source Code | Yes | The implementation of this work is available at github.com/Felix-Petersen/fairness-post-processing. |
| Open Datasets | Yes | We replicate the experiments of Yurochkin et al. [12] on the Bios [40] and Toxicity data sets. |
| Dataset Splits | No | The paper mentions 'validation data' for hyperparameter selection but does not provide specific details on the dataset splits (e.g., percentages or counts) for training, validation, and testing. |
| Hardware Specification | No | The paper mentions general computational resources but does not specify the hardware used for running the experiments (e.g., exact GPU/CPU models, memory, or cloud instances). |
| Software Dependencies | No | The paper mentions software like CVXPY and GloVe embeddings but does not provide specific version numbers for any software dependencies used in the experiments. |
| Experiment Setup | Yes | We evaluate the fairness-accuracy trade-offs for a range of threshold parameters τ (for GLIF and GLIF-NRW) and for a range of Lipschitz-constants L (for IF-constraints) in Figure 2. |
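The experiment-setup row describes sweeping a threshold parameter τ to trace a fairness-accuracy trade-off curve. A minimal sketch of such a sweep on toy data, where `post_process` is a hypothetical stand-in (neighborhood averaging) for the paper's GLIF post-processing, not the authors' actual implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 2-D features, binary labels, and noisy model scores.
X = rng.normal(size=(200, 2))
y = (X[:, 0] > 0).astype(int)
scores = X[:, 0] + 0.3 * rng.normal(size=200)

def post_process(scores, X, tau):
    """Hypothetical stand-in for GLIF-style post-processing:
    average each score with those of neighbors within distance tau,
    so that similar individuals receive similar predictions."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=-1)
    w = (d <= tau).astype(float)           # neighborhood weights
    return (w @ scores) / w.sum(axis=1)    # local averaging

# Sweep the threshold tau and record the accuracy at each setting;
# larger tau enforces more smoothing (fairness) at a possible accuracy cost.
for tau in [0.0, 0.5, 1.0, 2.0]:
    preds = (post_process(scores, X, tau) > 0).astype(int)
    acc = (preds == y).mean()
    print(f"tau={tau:.1f}  accuracy={acc:.3f}")
```

With `tau = 0` each point is its own only neighbor, so the scores pass through unchanged; as `tau` grows, predictions are averaged over ever-larger neighborhoods, which is the trade-off the paper's Figure 2 sweeps over.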