Learning Optimal Fair Policies

Authors: Razieh Nabi, Daniel Malinsky, Ilya Shpitser

ICML 2019

| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "We illustrate our approach with both synthetic data and real criminal justice data. ... We illustrate our proposal via experiments on synthetic and real data." |
| Researcher Affiliation | Academia | "Department of Computer Science, Johns Hopkins University, Baltimore, MD, USA." |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any information about the availability of open-source code for the described methodology. |
| Open Datasets | Yes | "We use the data made available by ProPublica and described in Angwin et al. (2016)." |
| Dataset Splits | No | The paper mentions generating a synthetic dataset of size 5,000 with 100 bootstrap replications, but does not specify train/validation/test splits for either the synthetic or real-data experiments. |
| Hardware Specification | No | The paper does not provide any details about the hardware used to run the experiments. |
| Software Dependencies | No | The paper mentions using the R package nloptr but does not specify version numbers for R or nloptr, which are needed for reproducibility. |
| Experiment Setup | No | The paper describes some parameters of the fairness constraints and utility functions (e.g., "PSEsy is 1.918 ... and is restricted to lie between −0.1 and 0.1") and details of the data generation. However, it does not report hyperparameters for the learning algorithms (Q-learning, value search), such as learning rates or optimizer settings, which are crucial for replicating the experimental setup. |
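To make concrete what a complete experiment-setup description would pin down, the constrained policy learning the paper describes (value search maximizing utility subject to a bound on a path-specific effect, PSE) can be sketched as below. This is a minimal illustrative sketch only: the synthetic data, the utility function, the logistic policy class, and the simple PSE proxy are all assumptions made here, not the authors' implementation (the paper itself uses the R package nloptr rather than scipy).

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Hypothetical synthetic data: binary sensitive attribute S and a covariate X
# that is correlated with S (so an unconstrained policy would be unfair).
n = 5000
S = rng.integers(0, 2, n)
X = rng.normal(size=n) + 0.5 * S

def treat_prob(theta):
    # Smoothed policy: probability of treatment under a logistic decision rule.
    return 1.0 / (1.0 + np.exp(-(theta[0] + theta[1] * X)))

def expected_utility(theta):
    # Hypothetical utility: treating high-X units is beneficial on average.
    return np.mean(treat_prob(theta) * X)

def pse(theta):
    # Hypothetical stand-in for a path-specific effect of S on the decision:
    # difference in mean treatment probability across levels of S.
    p = treat_prob(theta)
    return p[S == 1].mean() - p[S == 0].mean()

eps = 0.1  # fairness tolerance: PSE constrained to lie in [-eps, eps]
cons = [
    {"type": "ineq", "fun": lambda th: eps - pse(th)},  # pse <= eps
    {"type": "ineq", "fun": lambda th: pse(th) + eps},  # pse >= -eps
]

# Value search: maximize utility subject to the fairness constraint.
res = minimize(lambda th: -expected_utility(th), x0=np.zeros(2),
               method="SLSQP", constraints=cons)
print("policy parameters:", res.x, " PSE at optimum:", pse(res.x))
```

A full reproducibility statement would fix the items this sketch leaves arbitrary: the optimizer and its tolerances, the initialization, the policy class, and the exact PSE functional being constrained.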