Active fairness auditing
Authors: Tom Yan, Chicheng Zhang
ICML 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Finally, in Appendix F, we empirically explore the performance of Algorithm 3 and active learning, and compare them with i.i.d. sampling. As expected, our experiments confirm that under a fixed budget, Algorithm 3 is most effective at inducing a version space with a small µ-diameter, and can thus provide the strongest manipulation-proofness guarantee." (See also Appendix F, "Experiments.") |
| Researcher Affiliation | Academia | 1Carnegie Mellon University 2University of Arizona. |
| Pseudocode | Yes | Algorithm 1 ("Minimax optimal deterministic auditing") and Algorithm 3 ("Oracle-efficient Active Fairness Auditing") |
| Open Source Code | No | The paper does not include an unambiguous statement that the authors are releasing their code for the work described, nor does it provide a direct link to a source-code repository. |
| Open Datasets | Yes | "The first is COMPAS (Larson et al., 2016), where the two groups are defined to be Caucasian and non-Caucasian," and the second is the Student Performance dataset. |
| Dataset Splits | No | The paper mentions training a model on the COMPAS and Student Performance datasets but does not explicitly provide details about training, validation, or test dataset splits (e.g., percentages, sample counts, or cross-validation methodology). |
| Hardware Specification | No | The paper describes the experimental setup and results but does not provide any specific hardware details such as GPU/CPU models, memory, or cloud computing instance types used for running the experiments. |
| Software Dependencies | No | The paper mentions training a logistic regression model but does not provide specific software dependencies with version numbers, such as programming languages, libraries, or frameworks (e.g., Python, PyTorch, scikit-learn versions). |
| Experiment Setup | No | The paper mentions training a logistic regression model but does not provide specific experimental setup details such as hyperparameter values (e.g., learning rate, batch size, number of epochs) or optimizer settings. |