Fairness Without Demographics in Repeated Loss Minimization

Authors: Tatsunori Hashimoto, Megha Srivastava, Hongseok Namkoong, Percy Liang

ICML 2018

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate that DRO prevents disparity amplification on examples where ERM fails, and show improvements in minority group user satisfaction in a real-world text autocomplete task.
Researcher Affiliation | Academia | 1 Department of Computer Science, Stanford, USA; 2 Department of Statistics, Stanford, USA; 3 Management Science & Engineering, Stanford, USA. Correspondence to: Tatsunori Hashimoto <thashim@stanford.edu>.
Pseudocode | No | The paper describes its procedures in prose but does not include formal pseudocode or algorithm blocks.
Open Source Code | Yes | Reproducibility: code to generate results is available on the CodaLab platform at https://bit.ly/2sFkDpE.
Open Datasets | No | The paper describes creating a corpus of tweets for the autocomplete task but does not provide a link, DOI, or formal citation establishing the public availability of that corpus. It mentions using 'tweets built from two estimated demographic groups, African Americans and White Americans (Blodgett et al., 2016)', but this citation refers to the method of group estimation, not to the dataset used in the experiment.
Dataset Splits | No | The paper mentions evaluating on 'held out AAE tweets or SAE tweets' and training 'a set of five maximum likelihood bigram language models on a corpus', but it does not give the percentages or counts of the training, validation, and test splits needed for reproduction.
Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory used for the experiments.
Software Dependencies | No | The paper does not mention specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x).
Experiment Setup | Yes | The DRO model is trained using the dual objective with logistic loss and η = 0.95, which was the optimal dual solution for α_min = 0.2. At each round, a logistic regression classifier is fit using ERM or DRO with gradient descent, with the norm of the weight vector constrained to 1.
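
For concreteness, the following is a minimal sketch of this training step, not the authors' released CodaLab code. It assumes synthetic data, the fixed dual variable η = 0.95 reported above, and a dual constant C derived from α_min = 0.2; the specific formula C = sqrt(2(1/α_min − 1)² + 1) is an assumption about the χ²-divergence dual and should be checked against the paper's derivation.

```python
import torch

torch.manual_seed(0)

# Toy data standing in for the experiment's features/labels (the released
# CodaLab code uses the real autocomplete-task data instead).
n, d = 512, 10
X = torch.randn(n, d)
true_w = torch.randn(d)
y = torch.sign(X @ true_w + 0.5 * torch.randn(n))  # labels in {-1, +1}

w = torch.randn(d, requires_grad=True)

eta = 0.95       # fixed dual variable (reported optimal for alpha_min = 0.2)
alpha_min = 0.2
# Dual constant for the chi-squared DRO dual; this particular formula is an
# assumption here -- verify it against the paper's derivation.
C = (2 * (1 / alpha_min - 1) ** 2 + 1) ** 0.5

opt = torch.optim.SGD([w], lr=0.1)
for step in range(200):
    opt.zero_grad()
    margins = y * (X @ w)
    losses = torch.nn.functional.softplus(-margins)  # per-example logistic loss
    # DRO dual objective: C * sqrt(E[(loss - eta)_+^2]) + eta
    excess = torch.clamp(losses - eta, min=0.0)
    dro_objective = C * torch.sqrt((excess ** 2).mean() + 1e-12) + eta
    dro_objective.backward()
    opt.step()
    with torch.no_grad():
        # Keep ||w|| <= 1, matching "constraining the norm of the weight
        # vector to 1" (read here as projection onto the unit ball).
        w /= w.norm().clamp(min=1.0)
```

Swapping the DRO dual objective for `losses.mean()` recovers the ERM baseline the paper compares against.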