Bayesian Fairness
Authors: Christos Dimitrakakis, Yang Liu, David C. Parkes, Goran Radanovic
AAAI 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide experimental results on the COMPAS dataset (Larson et al. 2016) as well as artificial data, showing the robustness of the Bayesian approach, and comparing against methods that define fairness measures according to a single, marginalized model (e.g. (Hardt, Price, and Srebro 2016)). |
| Researcher Affiliation | Academia | Christos Dimitrakakis,1,2 Yang Liu,3 David C. Parkes,4 Goran Radanovic4 1University of Oslo; 2Chalmers; 3University of California, Santa Cruz; 4Harvard |
| Pseudocode | No | To perform this maximization, we use parametrized policies and stochastic gradient descent. In particular, for a finite set X and Y, the policies can be defined in terms of parameters w_xa = π(a \| x). Then we can perform stochastic gradient descent as detailed in the Supplementary materials, by sampling θ ∼ β, and calculating the gradient for each sampled θ. (The algorithm is deferred to the supplementary materials; no pseudocode block appears in the main text.) |
| Open Source Code | No | The paper mentions that 'All missing proofs and details can be found in our supplementary materials.' and algorithm details are 'detailed in the Supplementary materials', but it does not explicitly state that the source code for their methodology is released or provide a link to it. |
| Open Datasets | Yes | We provide experimental results on the COMPAS dataset (Larson et al. 2016) as well as artificial data... Larson, J.; Mattu, S.; Kirchner, L.; and Angwin, J. 2016. ProPublica COMPAS GitHub repository. https://github.com/propublica/compas-analysis/. |
| Dataset Splits | Yes | For the COMPAS dataset... We used the first 6000 observations for training and the remaining 1214 observations for validation. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory) used for running its experiments. |
| Software Dependencies | No | The paper does not provide specific ancillary software details with version numbers (e.g., Python 3.8, PyTorch 1.9). |
| Experiment Setup | Yes | In all cases where a Dirichlet prior was used, the Dirichlet prior parameters were set equal to 1/2. Here we consider a discrete decision problem, with |X| = 8, |Y| = |Z| = |A| = 2, and u(y, a) = I {y = a}. We used the first 6000 observations for training and the remaining 1214 observations for validation. |
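The setup quoted above (a discrete problem with \|X\| = 8, \|A\| = \|Y\| = 2, utility u(y, a) = I{y = a}, a Dirichlet prior with parameters 1/2, and gradient updates computed from posterior samples θ ∼ β) can be illustrated with a minimal sketch. This is not the authors' released code (none is linked); the softmax parametrization of π(a \| x), the synthetic counts, the learning rate, and the iteration budget are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Problem sizes from the paper's experiment setup: |X| = 8, |A| = |Y| = 2.
n_x, n_a, n_y = 8, 2, 2

# Hypothetical Dirichlet posterior over P(y | x): the paper's prior
# parameters of 1/2 plus synthetic observation counts (stand-in data).
counts = rng.integers(0, 50, size=(n_x, n_y))
alpha = counts + 0.5

def sample_theta():
    """Draw one model theta = P(y | x) per x from the Dirichlet posterior."""
    return np.array([rng.dirichlet(alpha[x]) for x in range(n_x)])

def policy(w):
    """Softmax policy pi(a | x) from logits w[x, a] (one possible
    parametrization of the paper's w_xa)."""
    e = np.exp(w - w.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

# Utility u(y, a) = I{y = a}, as stated in the experiment setup.
U = np.eye(n_y)

# Stochastic gradient ascent on expected utility: each step uses a single
# posterior sample theta, as described in the pseudocode row above.
w = np.zeros((n_x, n_a))
lr = 0.5
for _ in range(2000):
    theta = sample_theta()                    # theta ~ beta (posterior)
    pi = policy(w)
    q = theta @ U                             # E[u | x, a] under this theta
    baseline = (pi * q).sum(axis=1, keepdims=True)
    w += lr * pi * (q - baseline)             # softmax policy gradient

pi = policy(w)
```

A fairness-constrained objective (as in the paper) would subtract a penalty term, averaged over the same posterior samples, before taking the gradient; the sketch keeps only the utility term for brevity.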