Recycling Privileged Learning and Distribution Matching for Fairness
Authors: Novi Quadrianto, Viktoriia Sharmanska
NeurIPS 2017
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We experiment with two datasets: The Pro Publica COMPAS dataset and the Adult income dataset. ... Table 1: Results on multi-objective optimization which balances two main objectives: performance accuracies and fairness criteria. |
| Researcher Affiliation | Academia | Novi Quadrianto, Predictive Analytics Lab (PAL), University of Sussex, Brighton, United Kingdom (n.quadrianto@sussex.ac.uk); Viktoriia Sharmanska, Department of Computing, Imperial College London, London, United Kingdom (sharmanska.v@gmail.com). Also with National Research University Higher School of Economics, Moscow, Russia. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks (clearly labeled algorithm sections or code-like formatted procedures). |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing their code for the described methodology, nor does it provide a direct link to a source-code repository. |
| Open Datasets | Yes | We experiment with two datasets: The Pro Publica COMPAS dataset and the Adult income dataset. Pro Publica COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) has a total of 5,278 data instances... The Adult dataset has a total of 45,222 data instances... |
| Dataset Splits | Yes | We use 4,222 instances for training and 1,056 instances for test. ... We use 36,178 instances for training and 9,044 instances for test. ... The EMO is run on the 60% of the training data, the selection is done on the remaining 40%, and the reported results are on the separate test set based on the model trained on the 60% of the training data. |
| Hardware Specification | No | We thank NVIDIA for GPU donation and Amazon for AWS Cloud Credits. This mentions the general type of hardware used (GPUs, AWS Cloud) but does not provide specific models, configurations, or instance types to enable reproducibility. |
| Software Dependencies | No | The paper mentions using the 'DEAP toolbox [42]' but does not provide specific version numbers for DEAP or any other software dependencies, such as Python, PyTorch, or Scikit-learn, which would be necessary for full reproducibility. |
| Experiment Setup | Yes | For the baseline Zafar et al., as in [5], we set the hyper-parameters τ and µ corresponding to the Penalty Convex Concave procedure to 5.0 and 1.2, respectively. ... When solving DM (and DM+) optimization problems with L-BFGS, the hyper-parameters C, C_MMD, σ², (and γ) are set to 1000, 5000, 10, (and 1) for both datasets. ... We use 500 individuals in a loop of 50 iterations (generations). |
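The split protocol quoted above (train/test counts, with the EMO run on 60% of the training data and selection on the remaining 40%) can be sketched as a few lines of arithmetic. This is a minimal reconstruction, not the authors' code: the paper does not state how fractional counts are rounded, so the use of `round` here is an assumption, and the derived 60/40 subset sizes are illustrative.

```python
def emo_splits(n_train, n_test, emo_frac=0.60):
    """Partition sizes for the protocol described in the paper:
    the EMO runs on emo_frac of the training data, model selection
    uses the remainder, and evaluation uses the held-out test set.
    Rounding behaviour is an assumption (not specified in the paper)."""
    n_emo = round(n_train * emo_frac)   # 60% of training data for the EMO
    n_sel = n_train - n_emo             # remaining 40% for model selection
    return n_emo, n_sel, n_test

# ProPublica COMPAS: 5,278 instances -> 4,222 train / 1,056 test
print(emo_splits(4222, 1056))

# Adult: 45,222 instances -> 36,178 train / 9,044 test
print(emo_splits(36178, 9044))
```

With these assumptions, COMPAS yields roughly 2,533 EMO / 1,689 selection instances and Adult roughly 21,707 / 14,471; the evolutionary search itself uses a population of 500 individuals evolved for 50 generations, per the setup quoted in the table.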