Two Simple Ways to Learn Individual Fairness Metrics from Data
Authors: Debarghya Mukherjee, Mikhail Yurochkin, Moulinath Banerjee, Yuekai Sun
ICML 2020
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We show empirically that fair training with the learned metrics leads to improved fairness on three machine learning tasks susceptible to gender and racial biases. |
| Researcher Affiliation | Collaboration | 1Department of Statistics, University of Michigan 2IBM Research, MIT-IBM Watson AI Lab. |
| Pseudocode | Yes | Algorithm 1: estimating ran(A) by factor analysis |
| Open Source Code | Yes | Codes are available at https://github.com/mdebumich/Fair_metric_learning. |
| Open Datasets | Yes | For the set of comparable samples for FACE we choose embeddings of a side dataset of 1200 popular baby names in New York City (available from https://catalog.data.gov/dataset/); experiments use the Adult dataset (Bache & Lichman, 2013). |
| Dataset Splits | No | The paper uses the Adult dataset and a dataset of baby names, but it does not explicitly specify the training, validation, and test splits (e.g., percentages or sample counts) used for its experiments within the text. |
| Hardware Specification | No | The paper describes the methods and computational results but does not provide specific hardware details (e.g., GPU/CPU models, memory, or cloud instance types) used for running the experiments. |
| Software Dependencies | No | The paper mentions methods like SenSR and logistic regression but does not provide specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow, or specific library versions) used for the experiments. |
| Experiment Setup | No | The paper mentions using stochastic gradient descent (SGD) with a step size parameter (ηt) for EXPLORE and varying the number of factors (3, 10, 50) for FACE. However, it does not provide concrete hyperparameter values such as specific learning rates, batch sizes, number of epochs, or detailed optimizer settings for its experiments. |
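To make the pseudocode row concrete: Algorithm 1 (FACE) estimates the sensitive subspace ran(A) from embeddings of comparable samples. The sketch below is an illustrative simplification, not the paper's implementation: it assumes comparable samples come in pairs and estimates the subspace from the top singular vectors of their difference matrix, whereas the paper fits a factor model. The function name `estimate_sensitive_subspace` and the toy data are our own.

```python
import numpy as np

def estimate_sensitive_subspace(pairs, k):
    """Estimate a rank-k sensitive subspace from comparable pairs.

    pairs: array of shape (n, 2, d) -- n pairs of embeddings that
    should be treated as comparable (e.g. name embeddings differing
    only in inferred gender).
    Returns a (d, k) orthonormal basis for the estimated subspace.
    """
    # Differences between comparable samples capture the variation
    # a fair metric should ignore.
    diffs = pairs[:, 0, :] - pairs[:, 1, :]            # shape (n, d)
    # Top-k right singular vectors span the dominant directions of
    # that variation, our proxy for ran(A).
    _, _, vt = np.linalg.svd(diffs, full_matrices=False)
    return vt[:k].T                                    # shape (d, k)

# Toy usage: pairs differ only along the first coordinate axis,
# so the estimated rank-1 subspace should align with e_0.
rng = np.random.default_rng(0)
base = rng.normal(size=(100, 5))
pairs = np.stack([base, base.copy()], axis=1)
pairs[:, 0, 0] += 1.0                                  # sensitive direction
basis = estimate_sensitive_subspace(pairs, k=1)
```

A fair-metric learner would then measure distances after projecting out `basis`, so that movement along the sensitive direction costs nothing.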