Fairness without Demographics through Shared Latent Space-Based Debiasing
Authors: Rashidul Islam, Huiyuan Chen, Yiwei Cai
AAAI 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Our extensive experiments on benchmark datasets demonstrate that our methods consistently outperform existing state-of-the-art models in standard group fairness metrics. From the "Experimental Results" section: We conduct a comprehensive evaluation of our SLSD and R-SLSD on three benchmark datasets. |
| Researcher Affiliation | Industry | Rashidul Islam, Huiyuan Chen, Yiwei Cai Visa Research, USA {raislam, hchen, yicai}@visa.com |
| Pseudocode | No | The paper contains a computational graph (Figure 1) but no explicit pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-source code availability. |
| Open Datasets | Yes | We conduct a comprehensive evaluation of our SLSD and R-SLSD on three benchmark datasets: 1) Adult (Becker and Kohavi 1996): income prediction, 2) ACSIncome (Ding et al. 2021): another variant of income prediction and 3) Default (Yeh 2016): credit card default prediction. |
| Dataset Splits | Yes | Each dataset is randomly split into 70% training and 30% test sets. Hyper-parameter tuning, including learning rate, mini-batch size, and the fairness tuning parameter λ (from Equation 10), is conducted on the training set. Best hyper-parameter values for all approaches are chosen via grid-search by performing 5-fold cross-validation optimizing for the best overall balanced accuracy. (A sketch of this protocol appears after the table.) |
| Hardware Specification | No | The paper does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running experiments. |
| Software Dependencies | No | The paper does not specify software dependencies with version numbers. |
| Experiment Setup | Yes | Hyper-parameter tuning, including learning rate, mini-batch size, and the fairness tuning parameter λ (from Equation 10), is conducted on the training set. ... The architectures for the source encoder Eϑ, target encoder Eφ, classifier MΘ, and adversary DΦ are fully connected three-layer feed-forward networks (256, 128, 64) with ReLU activations. (A sketch of this architecture appears after the table.) |
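
The evaluation protocol quoted in the "Dataset Splits" row (70/30 split, grid search with 5-fold cross-validation optimizing balanced accuracy) can be sketched as follows. This is a minimal sketch assuming scikit-learn; the synthetic data, estimator, and grid values are illustrative placeholders rather than the authors' code, and the fairness parameter λ from Equation 10 has no analogue here and is omitted.

```python
# Minimal sketch of the reported evaluation protocol (assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split, GridSearchCV
from sklearn.neural_network import MLPClassifier
from sklearn.metrics import balanced_accuracy_score

# Stand-in for one benchmark dataset (Adult / ACSIncome / Default in the paper).
X, y = make_classification(n_samples=5000, n_features=20, random_state=0)

# 70% training / 30% test split, as reported.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.30, random_state=0)

# Grid-search learning rate and mini-batch size with 5-fold cross-validation,
# selecting the configuration with the best balanced accuracy.
param_grid = {
    "learning_rate_init": [1e-4, 1e-3, 1e-2],  # illustrative values
    "batch_size": [64, 128, 256],              # illustrative values
}
search = GridSearchCV(
    MLPClassifier(hidden_layer_sizes=(256, 128, 64), max_iter=200),
    param_grid,
    scoring="balanced_accuracy",
    cv=5,
)
search.fit(X_train, y_train)
print("best hyper-parameters:", search.best_params_)
print("test balanced accuracy:",
      balanced_accuracy_score(y_test, search.best_estimator_.predict(X_test)))
```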
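
The network sizes quoted in the "Experiment Setup" row can be read as below. This is a minimal PyTorch sketch under assumed input and output dimensions; the quote only specifies the hidden widths (256, 128, 64) and ReLU activations, so everything else is an illustrative assumption.

```python
# Minimal PyTorch sketch of the reported architecture: the source encoder E_theta,
# target encoder E_phi, classifier M_Theta, and adversary D_Phi are each fully
# connected three-layer feed-forward networks (256, 128, 64) with ReLU activations.
# Input/output dimensions below are assumptions for illustration, not from the paper.
import torch.nn as nn

def three_layer_mlp(in_dim: int, out_dim: int) -> nn.Sequential:
    """Fully connected 256-128-64 network with ReLU activations."""
    return nn.Sequential(
        nn.Linear(in_dim, 256), nn.ReLU(),
        nn.Linear(256, 128), nn.ReLU(),
        nn.Linear(128, 64), nn.ReLU(),
        nn.Linear(64, out_dim),
    )

n_features, latent_dim = 100, 32                            # assumed dimensions
source_encoder = three_layer_mlp(n_features, latent_dim)    # E_theta
target_encoder = three_layer_mlp(n_features, latent_dim)    # E_phi
classifier     = three_layer_mlp(latent_dim, 1)             # M_Theta: label logit
adversary      = three_layer_mlp(latent_dim, 1)             # D_Phi: adversarial head
```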