Fairness-Aware Learning for Continuous Attributes and Treatments

Authors: Jeremie Mary, Clément Calauzènes, Noureddine El Karoui

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Experiments show favorable comparisons to state of the art on binary variables and prove the ability to protect continuous ones."
Researcher Affiliation | Collaboration | "Criteo AI Lab, Paris, France; University of California, Berkeley, USA."
Pseudocode | No | No pseudocode or clearly labeled algorithm block was found.
Open Source Code | Yes | "Code available at https://github.com/criteo-research/continuous-fairness"
Open Datasets | Yes | "They propose to use 5 publicly available datasets: Arrhythmia, COMPAS, Adult, German, and Drug. A description of the datasets as well as the variable to protect is provided in the supplementary material of (Donini et al., 2018)." "The dataset is available here: https://archive.ics.uci.edu/ml/datasets/Communities+and+Crime"
Dataset Splits | Yes | "All the experiments are a result of a 10-fold cross-validation for which each fold is averaged over 20 random initializations (as the objective is non-convex)." (See the cross-validation sketch after this table.)
Hardware Specification | No | No specific hardware details (such as exact GPU/CPU models, memory, or cloud instance types) used for running the main experiments are provided. The paper only mentions "On a recent laptop" for a timing comparison, which is not the main experimental setup.
Software Dependencies | No | The paper mentions a "pytorch implementation" and the "Adam" optimizer but does not specify version numbers for any software dependencies.
Experiment Setup | Yes | "Loss is cross-entropy, gradient is Adam (Kingma & Ba, 2014) and learning rate is chosen from the values 10^-2, 10^-4, 3·10^-4. Batch size is chosen from {8, 16, 32, 64, 128}." "We used a simple neural net NN for these experiments: two hidden layers (first layer is from 30 to 100 neurons depending on the size of the data set, the second being 20 neurons smaller than the first one). Non-linearities are SELU (Klambauer et al., 2017)." "λ is set to 4 · Rényi batch size / batch size." "stochastic optimizer (Adam) with mini-batches of size n = 200." "All the methods are optimized using L-BFGS." (See the PyTorch sketch after this table.)
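
The evaluation protocol quoted under Dataset Splits (10-fold cross-validation, with each fold averaged over 20 random initializations) can be sketched as follows. This is a minimal sketch, not the authors' released code: the train_and_score function and the arrays X, y are hypothetical placeholders.

    import numpy as np
    from sklearn.model_selection import KFold

    def evaluate(X, y, train_and_score, n_folds=10, n_inits=20):
        # 10-fold cross-validation; each fold's score is averaged over
        # 20 random seeds because the non-convex objective makes results
        # sensitive to initialization.
        fold_scores = []
        kf = KFold(n_splits=n_folds, shuffle=True, random_state=0)
        for train_idx, test_idx in kf.split(X):
            seed_scores = [
                train_and_score(X[train_idx], y[train_idx],
                                X[test_idx], y[test_idx], seed=s)  # hypothetical trainer
                for s in range(n_inits)
            ]
            fold_scores.append(np.mean(seed_scores))
        return float(np.mean(fold_scores)), float(np.std(fold_scores))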
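
The Experiment Setup quotes also translate into a short PyTorch sketch: two hidden SELU layers with the second 20 neurons smaller than the first, cross-entropy loss, and Adam. The concrete first-layer width (60) and learning rate (3·10^-4) are illustrative picks from the quoted ranges, and make_net is a hypothetical helper, not the authors' code.

    import torch.nn as nn
    import torch.optim as optim

    def make_net(n_features, first_hidden=60):
        # First hidden layer has 30 to 100 neurons depending on the dataset;
        # the second hidden layer is 20 neurons smaller than the first.
        second_hidden = first_hidden - 20
        return nn.Sequential(
            nn.Linear(n_features, first_hidden), nn.SELU(),
            nn.Linear(first_hidden, second_hidden), nn.SELU(),
            nn.Linear(second_hidden, 2),  # logits for binary classification
        )

    net = make_net(n_features=30)
    criterion = nn.CrossEntropyLoss()
    # Learning rate searched over {1e-2, 1e-4, 3e-4}; batch size over {8, 16, 32, 64, 128}.
    optimizer = optim.Adam(net.parameters(), lr=3e-4)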