Minimax AUC Fairness: Efficient Algorithm with Provable Convergence

Authors: Zhenhuan Yang, Yan Lok Ko, Kush R. Varshney, Yiming Ying

AAAI 2023

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We conduct numerical experiments on both synthetic and real-world datasets to validate the effectiveness of the minimax framework and the proposed optimization algorithm."
Researcher Affiliation | Collaboration | Etsy, Inc. (Brooklyn, New York, USA); University at Albany, State University of New York (Albany, New York, USA); IBM Research (Yorktown Heights, New York, USA)
Pseudocode | Yes | Algorithm 1: Minimax Fair AUC
Open Source Code | Yes | Implementation details: https://github.com/zhenhuan-yang/Minimax-Fair-AUC
Open Datasets | Yes | "We evaluate our algorithms on four datasets that have been commonly used in the fair machine learning literature (Zafar et al. 2017; Donini et al. 2018). ... The Adult dataset ... The Bank dataset ... The Compas dataset ... The Default dataset (Yeh and Lien 2009)."
Dataset Splits | Yes | "We partition the datasets to training, validation and testing in the ratio 60%:20%:20%."
Hardware Specification | No | The paper describes the models and datasets used but does not provide specific details on the hardware (e.g., GPU/CPU models, memory) used for the experiments.
Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., specific libraries or frameworks like PyTorch, TensorFlow, or scikit-learn with their versions).
Experiment Setup | Yes | "We partition the datasets to training, validation and testing in the ratio 60%:20%:20%. The batch size |B|, the initial stepsizes η_θ^0, η_λ^0, and other hyperparameters are chosen based on the validation set. For Algorithm 1, early stopping is implemented based on the maximum group loss over the validation set."
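The experiment setup above (a 60%:20%:20% split, minimax training over per-group AUC losses, and early stopping on the maximum group loss over the validation set) can be sketched as follows. This is a minimal NumPy illustration, not the authors' released implementation: the pairwise squared-hinge surrogate for 1 − AUC, the multiplicative-weights update for the group weights λ, and all function names and hyperparameters here are assumptions.

```python
import numpy as np

def split_60_20_20(X, y, g, seed=0):
    """Random 60%/20%/20% train/validation/test partition, as in the paper's setup."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    n_tr, n_va = int(0.6 * len(X)), int(0.2 * len(X))
    tr, va, te = idx[:n_tr], idx[n_tr:n_tr + n_va], idx[n_tr + n_va:]
    return (X[tr], y[tr], g[tr]), (X[va], y[va], g[va]), (X[te], y[te], g[te])

def group_auc_loss(theta, X, y, g, group):
    """Pairwise squared-hinge surrogate for 1 - AUC within one protected group.
    (The surrogate choice is an assumption, not necessarily the paper's loss.)"""
    s = X @ theta
    pos, neg = s[(y == 1) & (g == group)], s[(y == 0) & (g == group)]
    if len(pos) == 0 or len(neg) == 0:
        return 0.0
    diff = pos[:, None] - neg[None, :]          # all positive-negative score gaps
    return float(np.mean(np.maximum(0.0, 1.0 - diff) ** 2))

def group_auc_grad(theta, X, y, g, group):
    """Gradient of the surrogate above with respect to the linear model theta."""
    Xp = X[(y == 1) & (g == group)]
    Xn = X[(y == 0) & (g == group)]
    if len(Xp) == 0 or len(Xn) == 0:
        return np.zeros_like(theta)
    D = Xp[:, None, :] - Xn[None, :, :]         # pairwise feature differences
    m = np.maximum(0.0, 1.0 - D @ theta)        # active hinge margins
    return (-2.0 * m[..., None] * D).mean(axis=(0, 1))

def minimax_fair_auc(X, y, g, Xv, yv, gv, steps=200, eta_theta=0.1, eta_lam=0.5):
    """Alternate descent on theta and ascent on simplex weights lam over groups;
    track the iterate with the smallest maximum group loss on the validation set."""
    groups = np.unique(g)
    theta = np.zeros(X.shape[1])
    lam = np.ones(len(groups)) / len(groups)
    best_theta, best_val = theta.copy(), np.inf
    for _ in range(steps):
        losses = np.array([group_auc_loss(theta, X, y, g, a) for a in groups])
        lam = lam * np.exp(eta_lam * losses)    # multiplicative-weights ascent on lam
        lam /= lam.sum()
        grad = sum(l * group_auc_grad(theta, X, y, g, a) for l, a in zip(lam, groups))
        theta = theta - eta_theta * grad
        # early stopping criterion: maximum group loss on the validation set
        val = max(group_auc_loss(theta, Xv, yv, gv, a) for a in groups)
        if val < best_val:
            best_val, best_theta = val, theta.copy()
    return best_theta, lam
```

A full-batch loop is shown for brevity; the paper's Algorithm 1 uses minibatches of size |B| with decaying stepsizes, and those details are omitted here.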