Fairness with Adaptive Weights
Authors: Junyi Chai, Xiaoqian Wang
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | In experiments, our method achieves comparable or better performance than state-of-the-art methods in both classification and regression tasks. Furthermore, our method exhibits robustness to label noise on various benchmark datasets. |
| Researcher Affiliation | Academia | 1Elmore Family School of Electrical and Computer Engineering, Purdue University. |
| Pseudocode | Yes | Algorithm 1 Adaptive Reweighing Algorithm |
| Open Source Code | No | The paper does not contain an explicit statement or link indicating that the source code for their method is publicly available. |
| Open Datasets | Yes | We evaluate our model on three benchmark classification datasets: the Adult dataset (Dua & Graff, 2017), the UCI German credit risk dataset (Dua & Graff, 2017), and the ProPublica COMPAS dataset (Larson et al., 2016), and two regression datasets: Law School (Wightman, 1998) and the Communities & Crime (CRIME) dataset. |
| Dataset Splits | Yes | We repeat experiments on each dataset five times and before each repetition we randomly split data into 80% training data and 20% test data. ... Values of hyperparameter α in our method are set by performing cross-validation on training data in the value range of 1 to 20. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU/CPU models, memory specifications, or cloud computing instance types used for running experiments. |
| Software Dependencies | No | The paper mentions software components like 'logistic regression', 'Regularized Least Squares (RLS)', and 'multilayer perceptron (MLP)', but does not provide specific version numbers for these or any other software dependencies. |
| Experiment Setup | No | The paper states that 'Values of hyperparameter α in our method are set by performing cross-validation on training data in the value range of 1 to 20' and 'The hyperparameters for the comparing methods are tuned as suggested by the authors', but it does not provide the specific numerical values of these hyperparameters or other training configurations (e.g., learning rate, batch size, number of epochs) used in their experiments. |
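The split and tuning protocol quoted above (five repetitions, each with a fresh random 80/20 train/test split, with α selected by cross-validation on the training portion over the range 1 to 20) can be sketched as follows. This is a minimal reconstruction, not the authors' code: the function names `ridge_cv_loss` and `evaluation_protocol` are hypothetical, and regularized least squares (one of the base models the paper mentions) stands in for the paper's actual reweighing method, whose implementation is not released.

```python
import numpy as np

def ridge_cv_loss(X, y, alpha, k=5):
    """k-fold cross-validation MSE for regularized least squares (RLS)
    with ridge penalty alpha. Stand-in objective for model selection;
    the paper does not specify its CV fold count, so k=5 is assumed."""
    n = len(X)
    folds = np.array_split(np.arange(n), k)
    losses = []
    for i in range(k):
        val = folds[i]
        tr = np.concatenate([folds[j] for j in range(k) if j != i])
        Xt, yt = X[tr], y[tr]
        # Closed-form ridge solution on the training folds.
        w = np.linalg.solve(Xt.T @ Xt + alpha * np.eye(X.shape[1]), Xt.T @ yt)
        losses.append(np.mean((X[val] @ w - y[val]) ** 2))
    return float(np.mean(losses))

def evaluation_protocol(X, y, n_repeats=5, alphas=range(1, 21), seed=0):
    """Five repetitions; each re-splits the data 80/20 at random and
    picks alpha by cross-validation on the training portion only,
    matching the protocol quoted in the table above."""
    rng = np.random.default_rng(seed)
    chosen = []
    for _ in range(n_repeats):
        idx = rng.permutation(len(X))
        cut = int(0.8 * len(X))
        tr, te = idx[:cut], idx[cut:]
        best = min(alphas, key=lambda a: ridge_cv_loss(X[tr], y[tr], a))
        chosen.append((best, tr, te))
    return chosen
```

Each repetition returns the selected α together with the train/test index arrays, so test data never influences hyperparameter selection.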