Rényi Fair Inference

Authors: Sina Baharlouei, Maher Nouiehed, Ahmad Beirami, Meisam Razaviyayn

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | To demonstrate the performance of the proposed Rényi fair inference framework in practice, we compare it with well-known existing methods on several benchmark datasets. Experiments indicate that the proposed method has favorable empirical performance against state-of-the-art approaches.
Researcher Affiliation | Academia | Sina Baharlouei (Industrial and Systems Engineering, USC, baharlou@usc.edu); Maher Nouiehed (Industrial Engineering and Management, AUB, mn102@aub.edu.lb); Ahmad Beirami (EECS, MIT, beirami@mit.edu); Meisam Razaviyayn (Industrial and Systems Engineering, USC, razaviya@usc.edu)
Pseudocode | Yes | Algorithm 1: Rényi Fair Classifier for Discrete Sensitive Attributes (...); Algorithm 2: Rényi Fair Classifier for Binary Sensitive Attributes (...); Algorithm 3: Rényi Fair K-means. (A hedged sketch of the binary-attribute case is given below the table.)
Open Source Code | No | The paper does not provide a direct link to a code repository or explicitly state that the source code for its methodology is publicly available.
Open Datasets | Yes | In this section, we evaluate the performance of the proposed Rényi fair classifier and Rényi fair k-means algorithm on three standard datasets: Bank, German Credit, and Adult. The detailed description of these datasets is available in the supplementary material. All of these datasets are publicly available at the UCI repository.
Dataset Splits | Yes | German Credit Dataset: (...) We chose the first 800 customers as the training data and the last 200 customers as the test data. Bank Dataset: (...) we split the data into a training set (32,000 data points) and a test set (13,211 data points). Adult Dataset: (...) The train and test sets are two separate files consisting of 32,000 and 16,000 samples, respectively. (A sketch of these splits is given below the table.)
Hardware Specification | No | The paper does not specify the exact hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions implementing a "logistic regression classifier" and a "2-layers neural network" but does not specify any software names with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | In this section, we train a two-layer neural network on the Adult dataset regularized by the Rényi correlation. In this experiment, the sensitive attribute is gender. We set the number of nodes in the hidden layer, the batch size, and the number of epochs to 12, 128, and 50, respectively. (A sketch of this setup is given below the table.)
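
The paper's pseudocode is only referenced above, not reproduced here. As a point of reference, the following is a minimal sketch of a fairness-regularized classifier for a binary sensitive attribute (the setting of Algorithm 2), assuming the Rényi-correlation regularizer is approximated by the squared Pearson correlation between the classifier's soft predictions and the sensitive attribute. This is an illustrative stand-in, not the authors' implementation; the function names and hyperparameters (lam, lr, epochs) are our own.

```python
# Hypothetical sketch (not the authors' code): logistic regression trained with a
# fairness penalty given by the squared Pearson correlation between the model's
# soft predictions and a binary sensitive attribute s, standing in for the
# Renyi-correlation regularizer of Algorithm 2.
import torch

def pearson_corr(a, b, eps=1e-8):
    """Pearson correlation between two 1-D float tensors."""
    a_c, b_c = a - a.mean(), b - b.mean()
    return (a_c * b_c).sum() / (a_c.norm() * b_c.norm() + eps)

def train_fair_logreg(X, y, s, lam=1.0, epochs=200, lr=0.1):
    """X: (n, d) float features; y: (n,) 0/1 float labels; s: (n,) 0/1 float sensitive attribute."""
    n, d = X.shape
    model = torch.nn.Linear(d, 1)
    opt = torch.optim.SGD(model.parameters(), lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        logits = model(X).squeeze(1)
        preds = torch.sigmoid(logits)
        # Prediction loss plus the (squared) correlation fairness penalty.
        loss = bce(logits, y) + lam * pearson_corr(preds, s) ** 2
        loss.backward()
        opt.step()
    return model
```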
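The splits quoted above can be reproduced along the following lines, assuming local copies of the UCI files; the file names, separators, and the head/tail split for the Bank dataset are assumptions, not details taken from the paper.

```python
# Hypothetical sketch of the dataset splits described in the table above.
import pandas as pd

# German Credit: first 800 rows for training, last 200 for testing.
german = pd.read_csv("german.data", sep=" ", header=None)  # assumed UCI format
german_train, german_test = german.iloc[:800], german.iloc[800:]

# Bank: 32,000 training points and 13,211 test points (only the sizes are stated
# in the paper; a head/tail split is assumed here).
bank = pd.read_csv("bank-full.csv", sep=";")
bank_train, bank_test = bank.iloc[:32000], bank.iloc[32000:]

# Adult: train and test ship as two separate UCI files.
adult_train = pd.read_csv("adult.data", header=None, skipinitialspace=True)
adult_test = pd.read_csv("adult.test", header=None, skipinitialspace=True, skiprows=1)
```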
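The stated setup (two-layer network with 12 hidden units, batch size 128, 50 epochs, gender as the sensitive attribute) could look roughly like the sketch below. The optimizer, learning rate, and penalty weight are assumptions, and the Pearson-correlation penalty again stands in for the Rényi-correlation regularizer.

```python
# Hypothetical sketch of the reported experiment setup, not the authors' code.
import torch
from torch.utils.data import DataLoader, TensorDataset

def pearson_corr(a, b, eps=1e-8):
    """Pearson correlation between two 1-D float tensors."""
    a_c, b_c = a - a.mean(), b - b.mean()
    return (a_c * b_c).sum() / (a_c.norm() * b_c.norm() + eps)

def build_model(input_dim, hidden_dim=12):
    """Two-layer network with 12 hidden units, as stated in the paper."""
    return torch.nn.Sequential(
        torch.nn.Linear(input_dim, hidden_dim),
        torch.nn.ReLU(),
        torch.nn.Linear(hidden_dim, 1),
    )

def train(X, y, s, lam=1.0, batch_size=128, epochs=50, lr=1e-3):
    """X: (n, d) float features; y, s: (n,) 0/1 float label and sensitive attribute (gender)."""
    model = build_model(X.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()
    loader = DataLoader(TensorDataset(X, y, s), batch_size=batch_size, shuffle=True)
    for _ in range(epochs):
        for xb, yb, sb in loader:
            opt.zero_grad()
            logits = model(xb).squeeze(1)
            preds = torch.sigmoid(logits)
            penalty = pearson_corr(preds, sb) ** 2  # fairness penalty per mini-batch
            (bce(logits, yb) + lam * penalty).backward()
            opt.step()
    return model
```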