Estimating and Controlling for Equalized Odds via Sensitive Attribute Predictors
Authors: Beepul Bharti, Paul Yi, Jeremias Sulam
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We illustrate these results with experiments on synthetic and real datasets. |
| Researcher Affiliation | Academia | Beepul Bharti Johns Hopkins University bbharti1@jhu.edu Paul Yi University of Maryland pyi@som.umaryland.edu Jeremias Sulam Johns Hopkins University jsulam1@jhu.edu |
| Pseudocode | No | The paper describes a linear program for worst-case fairness violation reduction but does not present it in a pseudocode block or a clearly labeled algorithm section. |
| Open Source Code | Yes | The code and data necessary to reproduce these experiments are available at https://github.com/Sulam-Group/EOD-with-Proxies. |
| Open Datasets | Yes | FIFA 2020 (Awasthi et al. [5]): The task is to learn a classifier f, using FIFA 2020 player data [29]... [29] Stefano Leone. FIFA 21 complete player dataset, Oct 2020. URL https://www.kaggle.com/datasets/stefanoleone992/fifa-21-complete-player-dataset. ACSPublicCoverage (Ding et al. [15]): ...using the 2018 state census data... CheXpert (Irvin et al. [24]): CheXpert is a large public dataset for chest radiograph interpretation... (A loading sketch for the ACSPublicCoverage task follows the table.) |
| Dataset Splits | No | The paper mentions generating predictions on a 'test dataset' and using a 'bootstrap method to generate 1,000 samples', but it does not specify explicit training/validation/test splits of the original datasets (e.g., an 80/10/10 split or predefined splits), so the data partitioning cannot be reproduced exactly. (A bootstrap sketch follows the table.) |
| Hardware Specification | No | The paper describes models such as BERT, Random Forests, and DenseNet121 along with their training parameters, but it does not specify the hardware (e.g., GPU models, CPU types, or cloud computing instances) used for the experiments. |
| Software Dependencies | No | The paper mentions using a Bidirectional Encoder Representations from Transformers (BERT) model, the Adam optimizer, Random Forest classifiers, and a DenseNet121 convolutional neural network architecture, but it does not provide version numbers for the software libraries or frameworks used (e.g., Python, PyTorch, TensorFlow, scikit-learn). |
| Experiment Setup | Yes | We use the Adam optimizer [27] with default β-parameters of β₁ = 0.9, β₂ = 0.999 and a fixed learning rate of 1 × 10⁻⁴. Batches are sampled using a fixed batch size of 16 images and we train for 5 epochs. (A configuration sketch follows the table.) |
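
The Open Datasets row cites the ACSPublicCoverage task from Ding et al. [15], distributed through the folktables package. Below is a minimal loading sketch using folktables' documented API, assuming the 2018 1-Year person survey described in the paper; the state selection (`"CA"`) is an illustrative assumption, not taken from the paper.

```python
# Minimal sketch: loading the ACSPublicCoverage task with folktables
# (Ding et al. [15]). The 2018 1-Year person survey matches the paper's
# description; the state list is an illustrative assumption.
from folktables import ACSDataSource, ACSPublicCoverage

data_source = ACSDataSource(survey_year="2018", horizon="1-Year", survey="person")
acs_data = data_source.get_data(states=["CA"], download=True)

# df_to_numpy returns (features, labels, group); the group column is the
# sensitive attribute defined by the task object.
features, labels, group = ACSPublicCoverage.df_to_numpy(acs_data)
print(features.shape, labels.shape, group.shape)
```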
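
The Dataset Splits row mentions a bootstrap with 1,000 samples over test-set predictions. The following is a minimal sketch of that resampling pattern, assuming predictions and group labels are already in hand; `eod_gap` is a hypothetical stand-in for the paper's equalized-odds estimator, written here as a simple true-positive-rate gap between two groups.

```python
import numpy as np

def eod_gap(y_true, y_pred, group):
    """TPR gap between two groups (illustrative stand-in, not the paper's estimator)."""
    tprs = []
    for g in (0, 1):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

def bootstrap_gaps(y_true, y_pred, group, n_boot=1000, seed=0):
    """Resample the test set with replacement and recompute the gap each time."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    gaps = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, size=n)  # bootstrap resample indices
        gaps[b] = eod_gap(y_true[idx], y_pred[idx], group[idx])
    return gaps

# Example usage with synthetic arrays and a 95% percentile interval:
rng = np.random.default_rng(1)
y_true = rng.integers(0, 2, 500)
y_pred = rng.integers(0, 2, 500)
group = rng.integers(0, 2, 500)
gaps = bootstrap_gaps(y_true, y_pred, group)
lo, hi = np.percentile(gaps, [2.5, 97.5])
print(f"95% bootstrap interval for the TPR gap: [{lo:.3f}, {hi:.3f}]")
```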
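
The Experiment Setup row quotes a concrete recipe: Adam with β₁ = 0.9, β₂ = 0.999, a fixed learning rate of 1 × 10⁻⁴, batch size 16, and 5 epochs. Below is a minimal PyTorch sketch of that configuration, assuming torchvision's DenseNet121 (the paper names the architecture but not the framework); the dummy tensors stand in for CheXpert images so the sketch runs end to end.

```python
import torch
from torch.utils.data import DataLoader, TensorDataset
from torchvision.models import densenet121

# Dummy tensors stand in for CheXpert images (assumption for self-containment).
train_dataset = TensorDataset(torch.randn(64, 3, 224, 224),
                              torch.randint(0, 2, (64,)))
loader = DataLoader(train_dataset, batch_size=16, shuffle=True)  # batch size 16

model = densenet121(num_classes=2)
# Adam with the quoted default betas and the fixed 1e-4 learning rate.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4, betas=(0.9, 0.999))
criterion = torch.nn.CrossEntropyLoss()

model.train()
for epoch in range(5):  # train for 5 epochs, as quoted
    for images, targets in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), targets)
        loss.backward()
        optimizer.step()
```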