The Implicit Fairness Criterion of Unconstrained Learning
Authors: Lydia T. Liu, Max Simchowitz, Moritz Hardt
ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Finally, we verify our theoretical findings with experiments on two well-known datasets, demonstrating the effectiveness of unconstrained learning in achieving approximate calibration with respect to multiple group attributes simultaneously. (Section 1) |
| Researcher Affiliation | Academia | Department of Electrical Engineering and Computer Sciences, University of California, Berkeley, USA. |
| Pseudocode | No | The paper does not contain any pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include any explicit statement about releasing source code or a link to a code repository. |
| Open Datasets | Yes | These are the Adult dataset from the UCI Machine Learning Repository (Dua and Karra Taniskidou, 2017) and a dataset of pretrial defendants from Broward County, Florida (Angwin et al., 2016; Dressel and Farid, 2018) (Section 3). |
| Dataset Splits | Yes | Score functions are obtained by logistic regression on a training set that is 80% of the original dataset, using all available features, unless otherwise stated. (Section 3) |
| Hardware Specification | No | The paper does not specify any hardware details (e.g., GPU models, CPU types, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions that score functions are obtained by 'logistic regression' but does not specify any software names with version numbers (e.g., Python, PyTorch, scikit-learn, etc.). |
| Experiment Setup | Yes | Score functions are obtained by logistic regression on a training set that is 80% of the original dataset, using all available features, unless otherwise stated. (Section 3). In Figure 6 (top), we implicitly restrict the model class by varying the regularization parameter, with smaller parameters corresponding to more severe regularization, constraining the learned weights to be inside a smaller L1 ball. (Section 3.3) (See the illustrative sketch below the table.) |
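
Since the paper does not release code, the following is a minimal sketch of the setup described in the Dataset Splits and Experiment Setup rows: an 80% training split, logistic-regression score functions on all available features, and a model class implicitly restricted by varying the strength of L1 regularization. The use of scikit-learn, the OpenML mirror of the Adult dataset, the `sex` column as the group attribute, the grid of regularization values, and the binning-based calibration check are illustrative assumptions, not details taken from the paper.

```python
import numpy as np
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Assumed loading path: the paper uses the UCI copy of the Adult dataset;
# here we pull the OpenML mirror for convenience.
adult = fetch_openml("adult", version=2, as_frame=True)
X_raw = adult.data
y = (adult.target == ">50K").astype(int)
group = X_raw["sex"]                          # protected attribute (assumed column name)
X = pd.get_dummies(X_raw, drop_first=True)    # one-hot encode all available features

# 80% training split, as stated in Section 3 of the paper.
X_tr, X_te, y_tr, y_te, g_tr, g_te = train_test_split(
    X, y, group, train_size=0.8, random_state=0
)

# Implicitly restrict the model class by varying the L1 regularization strength:
# smaller C constrains the learned weights to a smaller L1 ball (cf. Figure 6, top).
for C in (0.001, 0.01, 0.1, 1.0):
    clf = LogisticRegression(penalty="l1", solver="liblinear", C=C, max_iter=1000)
    clf.fit(X_tr, y_tr)
    scores = clf.predict_proba(X_te)[:, 1]

    # Crude group-calibration check (not the paper's exact metric): within coarse
    # score bins, compare the mean score to the empirical positive rate per group.
    bins = np.clip((scores * 10).astype(int), 0, 9)
    y_np, g_np = y_te.to_numpy(), g_te.to_numpy()
    for g in np.unique(g_np):
        gaps = []
        for b in range(10):
            sel = (g_np == g) & (bins == b)
            if sel.any():
                gaps.append(abs(scores[sel].mean() - y_np[sel].mean()))
        print(f"C={C:<6} group={g:<8} mean |score - positive rate| = {np.mean(gaps):.3f}")
```

Under the paper's thesis, the per-group calibration gaps printed above should stay small for the unconstrained (weakly regularized) models even though no fairness constraint is imposed; the exact numbers depend on the assumed preprocessing and binning choices.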