Stable and Fair Classification
Authors: Lingxiao Huang, Nisheeth Vishnoi
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We assess the benefits of our approach empirically by extending several fair classification algorithms that are shown to achieve a good balance between fairness and accuracy over the Adult dataset, and show that our framework improves the stability at only a slight sacrifice in accuracy. |
| Researcher Affiliation | Academia | EPFL, Switzerland; Yale University, USA. |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for the open-sourcing of its code. |
| Open Datasets | Yes | Our simulations are over an income dataset Adult (Dheeru & Karra Taniskidou, 2017), that records the demographics of 45222 individuals, along with a binary label indicating whether the income of an individual is greater than 50k USD or not. We use the pre-processed dataset as in (Friedler et al., 2019). |
| Dataset Splits | Yes | We perform 50 repetitions, in which we uniformly sample a training set (75%) from the remaining data. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | For all three algorithms, we set the regularization parameter λ to be 0, 0.01, 0.02, 0.03, 0.04, 0.05 and compute the resulting stability metric stab, average accuracy and average fairness. |
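The protocol described in the table (50 repetitions, each on a uniformly sampled 75% training split, sweeping the regularization parameter λ over 0–0.05 and recording average accuracy and a stability metric) can be sketched as follows. This is an illustrative reconstruction, not the paper's code: the classifier is a plain L2-regularized logistic regression, the data is synthetic stand-in for Adult, and the stability proxy (mean pairwise agreement of held-out predictions across repetitions) is an assumption, not the paper's `stab` metric.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the Adult dataset (the paper uses the real one).
n, d = 400, 5
X = rng.normal(size=(n, d))
w_true = rng.normal(size=d)
y = (X @ w_true + 0.5 * rng.normal(size=n) > 0).astype(float)

def fit_logreg(X, y, lam, steps=200, lr=0.1):
    """L2-regularized logistic regression via gradient descent (illustrative)."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))
        grad = X.T @ (p - y) / len(y) + lam * w
        w -= lr * grad
    return w

lambdas = [0.0, 0.01, 0.02, 0.03, 0.04, 0.05]  # values from the table
reps = 50                                       # 50 repetitions, as quoted
X_test, y_test = X[:100], y[:100]               # fixed held-out evaluation set
pool_X, pool_y = X[100:], y[100:]

for lam in lambdas:
    preds, accs = [], []
    for _ in range(reps):
        # Uniformly sample a 75% training set, as described in the paper.
        idx = rng.choice(len(pool_y), size=int(0.75 * len(pool_y)), replace=False)
        w = fit_logreg(pool_X[idx], pool_y[idx], lam)
        p = (X_test @ w > 0).astype(float)
        preds.append(p)
        accs.append((p == y_test).mean())
    P = np.array(preds)
    # Stability proxy: mean pairwise agreement of test predictions across reps.
    agree = np.mean([(P[i] == P[j]).mean()
                     for i in range(reps) for j in range(i + 1, reps)])
    print(f"lambda={lam:.2f}  avg_acc={np.mean(accs):.3f}  stability={agree:.3f}")
```

Increasing λ here plays the role of the paper's regularization knob: larger values shrink the learned classifier toward a common solution across resampled training sets, which is the mechanism by which stability is traded against accuracy.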