Statistical inference for individual fairness

Authors: Subha Maity, Songkai Xue, Mikhail Yurochkin, Yuekai Sun

ICLR 2021

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In our experiments we first verify our methodology in simulations and then present a case study of testing individual fairness on the Adult dataset (Dua & Graff, 2017). We demonstrate the utility of our tools in a real-world case study.
Researcher Affiliation | Collaboration | Subha Maity, Department of Statistics, University of Michigan (smaity@umich.edu); Songkai Xue, Department of Statistics, University of Michigan (sxue@umich.edu); Mikhail Yurochkin, IBM Research, MIT-IBM Watson AI Lab (mikhail.yurochkin@ibm.com); Yuekai Sun, Department of Statistics, University of Michigan (yuekai@umich.edu)
Pseudocode | Yes | Algorithm 1: Individual fairness testing. (A hedged sketch of such a testing loop is given after this table.)
Open Source Code | No | Code for SenSR (Yurochkin et al., 2020) is provided with the submission, with a demonstration of fitting the model and the choice of hyperparameters. The code can also be found at https://github.com/fairlearn/fairlearn. There is no explicit statement or link for code implementing *their* specific individual fairness testing tools, only for components used or for anonymous review.
Open Datasets | Yes | Adult dataset (Dua & Graff, 2017), with the corresponding reference: Dua & Graff. UCI Machine Learning Repository, 2017. URL http://archive.ics.uci.edu/ml. Also the COMPAS recidivism prediction dataset (Larson et al., 2016).
Dataset Splits | No | For each model, 10 random train/test splits of the dataset are used, with 80% of the data used for training. This statement describes train/test splits but does not mention a validation split. (A minimal sketch of this splitting protocol is given after this table.)
Hardware Specification | No | The paper does not provide any specific details about the hardware (e.g., CPU or GPU models, or cloud instances) used for running the experiments.
Software Dependencies | No | The paper mentions the Adam optimizer and links to the fairlearn library on GitHub, but it does not specify any software names with version numbers for reproducibility.
Experiment Setup | Yes | For both models the same parameters are involved: learning_rate (step size for the Adam optimizer), batch_size (mini-batch size at training time), and num_steps (number of training steps to be performed). The choices of hyperparameters are presented in Table 2: learning_rate = 10⁻⁴, batch_size = 250, num_steps = 8K. For each of the models the regularizer is λ = 50, the number of steps is T = 500, and the step size is ϵ_t = 0.01. (A hedged training-configuration sketch based on these values appears after this table.)
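
For the Pseudocode row, the following is a minimal sketch of a gradient-ascent fairness audit in the spirit of the paper's individual fairness testing algorithm, not a reproduction of its Algorithm 1. Only the regularizer λ = 50, the number of steps T = 500, and the step size ϵ_t = 0.01 come from the paper; the model, loss function, and fair-metric matrix `sigma_inv` are placeholders.

```python
# Minimal sketch of a gradient-ascent audit for individual fairness.
# NOT the authors' exact Algorithm 1: lam, T, and eps mirror values
# reported in the paper, but model, loss_fn, and sigma_inv are placeholders.
import torch

def audit_point(model, loss_fn, x, y, sigma_inv, lam=50.0, T=500, eps=0.01):
    """Ascend the loss around a single input x (shape (d,)), penalizing the
    squared fair-metric distance d(x, x')^2 = (x - x')^T sigma_inv (x - x').
    Returns the perturbed input and the loss gap it induces."""
    x0 = x.detach()
    x_adv = x0.clone().requires_grad_(True)
    for _ in range(T):
        diff = x_adv - x0
        fair_dist = diff @ sigma_inv @ diff            # squared fair distance
        obj = loss_fn(model(x_adv), y) - lam * fair_dist
        grad, = torch.autograd.grad(obj, x_adv)
        with torch.no_grad():
            x_adv += eps * grad                        # ascent step
    with torch.no_grad():
        gap = loss_fn(model(x_adv), y) - loss_fn(model(x0), y)
    return x_adv.detach(), gap.item()
```

In the paper the per-point losses at the original and perturbed inputs are aggregated over a test sample into a test statistic; how that aggregation and the resulting inference are done is not reproduced in this sketch.

For the Dataset Splits row, a minimal sketch of the reported evaluation protocol (10 random 80/20 train/test splits). The arrays `X` and `y` and any downstream fitting or evaluation code are placeholders, and the seeding scheme is an assumption, since the paper does not state how the splits were generated.

```python
# Minimal sketch of 10 random 80/20 train/test splits as described above.
# Using `seed` as random_state is an assumption; the paper gives no seeds.
from sklearn.model_selection import train_test_split

def repeated_splits(X, y, n_repeats=10, train_frac=0.8):
    """Yield [X_train, X_test, y_train, y_test] for each random split."""
    for seed in range(n_repeats):
        yield train_test_split(X, y, train_size=train_frac, random_state=seed)
```

For the Experiment Setup row, a sketch of an Adam training loop using the hyperparameters from Table 2. Reading the garbled "10 4" as a learning rate of 10⁻⁴ is an interpretation; batch_size = 250 and num_steps = 8,000 are as reported, while the model, loss function, and dataset are placeholders rather than the authors' setup.

```python
# Sketch of an Adam training loop with the Table 2 hyperparameters.
# learning_rate = 1e-4 is an assumed reading of the garbled "10 4";
# the model, loss_fn, and dataset are placeholders.
import torch
from torch.utils.data import DataLoader

LEARNING_RATE = 1e-4   # Adam step size
BATCH_SIZE = 250       # mini-batch size at training time
NUM_STEPS = 8_000      # total number of gradient steps ("8K")

def train(model, loss_fn, dataset):
    loader = DataLoader(dataset, batch_size=BATCH_SIZE, shuffle=True)
    opt = torch.optim.Adam(model.parameters(), lr=LEARNING_RATE)
    batches = iter(loader)
    for _ in range(NUM_STEPS):
        try:
            xb, yb = next(batches)
        except StopIteration:          # restart the loader when exhausted
            batches = iter(loader)
            xb, yb = next(batches)
        opt.zero_grad()
        loss_fn(model(xb), yb).backward()
        opt.step()
    return model
```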
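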
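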