Path-Specific Counterfactual Fairness
Authors: Silvia Chiappa
AAAI 2019, pp. 7801-7808
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate the proposed PSCF-VAE method on the UCI Adult and German Credit datasets. ... In Fig. 5, we show the accuracy obtained by PSCF-VAE on the test set for increasing values of β ... In Fig. 6, we show histograms of two dimensions of qφ(Hm\|A). |
| Researcher Affiliation | Industry | Silvia Chiappa (csilvia@google.com), DeepMind, London |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any statement or link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We evaluate the proposed PSCF-VAE method on the UCI Adult and German Credit datasets. ... The Adult dataset from the UCI repository (Lichman 2013) |
| Dataset Splits | Yes | The Adult dataset from the UCI repository (Lichman 2013) contains ... for 48,842 individuals; 32,561 and 16,281 for the training and test sets respectively. ... [German Credit:] We divided the dataset into training and test sets of sizes 700 and 300 respectively. (A split sketch follows the table.) |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running its experiments. |
| Software Dependencies | No | The paper mentions the "Adam optimizer (Kingma and Ba 2015)" and that neural networks were used, but does not provide specific version numbers for software dependencies like Python, deep learning frameworks (e.g., TensorFlow, PyTorch), or other libraries. |
| Experiment Setup | Yes | Training was achieved with the Adam optimizer (Kingma and Ba 2015) with learning rate 0.01, mini-batch size 128, and default values β1 = 0.9, β2 = 0.999, and ϵ = 1e-8. Training was stopped after 20,000 steps. ... As fθ we used a neural network with one linear layer of size 100 with tanh activation, followed by a linear layer. ... As variational distribution qφ we used a ten-dimensional Gaussian with diagonal covariance, with means and log variances obtained as the outputs of a neural network with two linear layers of size 20 and tanh activation, followed by a linear layer. (A training-setup sketch follows the table.) |
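
The reported splits are straightforward to reproduce. Below is a minimal sketch assuming pandas and scikit-learn; the file names (`adult.data`, `adult.test`, `german.data`) follow the UCI repository's conventions, and the random German Credit split is an assumption, since the paper does not state how the 700/300 division was made.

```python
# Sketch of the reported dataset splits (pandas/scikit-learn assumed;
# file paths are the standard UCI names, not taken from the paper).
import pandas as pd
from sklearn.model_selection import train_test_split

# UCI Adult ships with a predefined split: 32,561 train / 16,281 test.
adult_train = pd.read_csv("adult.data", header=None)
adult_test = pd.read_csv("adult.test", header=None, skiprows=1)  # first line is a comment

# UCI German Credit has 1,000 rows; the paper divides them 700/300.
# A random split is assumed here, as the paper does not specify one.
german = pd.read_csv("german.data", sep=" ", header=None)
german_train, german_test = train_test_split(
    german, train_size=700, test_size=300, random_state=0
)
```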
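
The quoted setup also maps directly onto a few lines of code. The sketch below assumes PyTorch (the paper does not name a framework); the input/output dimensions are placeholders rather than values from the paper, and the PSCF-VAE objective itself is not reproduced, only the architectures and optimizer settings quoted above.

```python
# Minimal sketch of the reported training setup, assuming PyTorch.
import torch
import torch.nn as nn

x_dim, out_dim = 10, 1  # hypothetical dimensions, not from the paper

# f_theta: one linear layer of size 100 with tanh, followed by a linear layer.
f_theta = nn.Sequential(
    nn.Linear(x_dim, 100),
    nn.Tanh(),
    nn.Linear(100, out_dim),
)

# q_phi: ten-dimensional diagonal Gaussian whose means and log-variances
# are the outputs of a network with two linear layers of size 20 and tanh
# activations, followed by a linear layer emitting 2 * 10 values.
class QPhi(nn.Module):
    def __init__(self, in_dim, latent_dim=10):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 20), nn.Tanh(),
            nn.Linear(20, 20), nn.Tanh(),
            nn.Linear(20, 2 * latent_dim),
        )

    def forward(self, a):
        mean, log_var = self.net(a).chunk(2, dim=-1)
        return mean, log_var

q_phi = QPhi(in_dim=1)  # conditioning dimension is a placeholder

# Adam with the quoted hyperparameters; 20,000 steps with mini-batches of 128.
params = list(f_theta.parameters()) + list(q_phi.parameters())
optimizer = torch.optim.Adam(params, lr=0.01, betas=(0.9, 0.999), eps=1e-8)

# for step in range(20_000):
#     batch = sample_minibatch(size=128)   # hypothetical data loader
#     loss = pscf_vae_loss(batch)          # the paper's objective (not shown here)
#     optimizer.zero_grad()
#     loss.backward()
#     optimizer.step()
```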