Fairness for Robust Log Loss Classification
Authors: Ashkan Rezaei, Rizal Fathony, Omid Memarrast, Brian Ziebart
AAAI 2020, pp. 5511–5518
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We then demonstrate the practical advantages of our approach on three benchmark fairness datasets. We evaluate our proposed algorithm on three benchmark fairness datasets: (1) The UCI Adult (Dheeru and Karra Taniskidou 2017) dataset includes 45,222 samples... (2) The Pro Publica’s COMPAS recidivism dataset (Larson et al. 2016) contains 6,167 samples... (3) The dataset from the Law School Admissions Council’s National Longitudinal Bar Passage Study (Wightman 1998) has 20,649 examples. |
| Researcher Affiliation | Academia | Ashkan Rezaei,1 Rizal Fathony,2 Omid Memarrast,1 Brian Ziebart1 1Department of Computer Science, University of Illinois at Chicago 2School of Computer Science, Carnegie Mellon University {arezae4, omemar2, bziebart}@uic.edu, rfathony@cs.cmu.edu |
| Pseudocode | No | The paper states "We refer to the supplementary material for the detailed algorithm." and "We refer the reader to the supplementary material for details.", indicating that the pseudocode appears only in the supplementary material, not in the main paper. |
| Open Source Code | No | No explicit statement or link was found indicating the availability of the authors' own open-source code for the methodology described in this paper. |
| Open Datasets | Yes | We evaluate our proposed algorithm on three benchmark fairness datasets: (1) The UCI Adult (Dheeru and Karra Taniskidou 2017) dataset includes 45,222 samples... (2) The Pro Publica’s COMPAS recidivism dataset (Larson et al. 2016) contains 6,167 samples... (3) The dataset from the Law School Admissions Council’s National Longitudinal Bar Passage Study (Wightman 1998) has 20,649 examples. |
| Dataset Splits | Yes | We perform all of our experiments using 20 random splits of each dataset into a training set (70% of examples) and a testing set (30%). We cross validate our model on a separate validation set using the best logloss to select an L2 penalty from ({.001, .005, .01, .05, .1, .2, .3, .4, .5}). |
| Hardware Specification | No | No specific hardware details (e.g., CPU/GPU models, memory, or cloud instance types) used for experiments were mentioned in the paper. |
| Software Dependencies | No | No specific software dependencies with version numbers (e.g., Python 3.x, PyTorch 1.x) were explicitly stated in the paper. It only mentions 'scikit-learn' without a version. |
| Experiment Setup | Yes | We cross validate our model on a separate validation set using the best logloss to select an L2 penalty from ({.001, .005, .01, .05, .1, .2, .3, .4, .5}). |
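
The dataset-split and experiment-setup rows describe the evaluation protocol: 20 random 70%/30% train/test splits, with an L2 penalty chosen from {.001, .005, .01, .05, .1, .2, .3, .4, .5} by best log loss on a separate validation set. The following is a minimal sketch of that protocol only, not the authors' method: a plain scikit-learn `LogisticRegression` stands in for the paper's fair robust log-loss classifier, the `load_fairness_dataset` loader and the validation fraction are assumptions, and only the split scheme and penalty grid are taken from the paper's description.

```python
# Hedged sketch of the evaluation protocol summarized above: 20 random
# 70/30 train/test splits, with an L2 penalty selected on a held-out
# validation set by log loss.  LogisticRegression is a stand-in for the
# paper's robust log-loss classifier, not the authors' implementation.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import log_loss
from sklearn.model_selection import train_test_split

# Penalty grid quoted in the paper.
L2_GRID = [0.001, 0.005, 0.01, 0.05, 0.1, 0.2, 0.3, 0.4, 0.5]


def run_protocol(X, y, n_splits=20, seed=0):
    test_scores = []
    for split in range(n_splits):
        # 70% train / 30% test, repeated over 20 random splits.
        X_tr, X_te, y_tr, y_te = train_test_split(
            X, y, test_size=0.3, random_state=seed + split)

        # Carve a validation set out of the training portion; the paper
        # mentions a separate validation set but not its size, so the
        # 20% fraction here is an assumption.
        X_fit, X_val, y_fit, y_val = train_test_split(
            X_tr, y_tr, test_size=0.2, random_state=seed + split)

        best_lam, best_val = None, np.inf
        for lam in L2_GRID:
            # scikit-learn uses an inverse regularization strength C,
            # so a penalty lambda maps (roughly) to C = 1 / lambda.
            clf = LogisticRegression(C=1.0 / lam, max_iter=1000)
            clf.fit(X_fit, y_fit)
            val = log_loss(y_val, clf.predict_proba(X_val))
            if val < best_val:
                best_lam, best_val = lam, val

        # Refit on the full training split with the selected penalty,
        # then score log loss on the held-out test split.
        final = LogisticRegression(C=1.0 / best_lam, max_iter=1000)
        final.fit(X_tr, y_tr)
        test_scores.append(log_loss(y_te, final.predict_proba(X_te)))

    return np.mean(test_scores), np.std(test_scores)


# Hypothetical usage with an assumed loader for one of the benchmarks:
# X, y = load_fairness_dataset("adult")
# mean_ll, std_ll = run_protocol(X, y)
```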