On the Tradeoff Between Robustness and Fairness
Authors: Xinsong Ma, Zekai Wang, Weiwei Liu
NeurIPS 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | "Our extensive experimental results provide an affirmative answer to this question: with an increasing perturbation radius, stronger AT will lead to a larger class-wise disparity of robust accuracy." ... "This section explores the relations between the perturbation radius in AT and the variance of class-wise robust accuracy, and studies whether there is a tradeoff between average robust accuracy and class-wise disparity of robust accuracy through a series of experiments." ... "In this section, we present the experimental results to validate the effectiveness of FAT for mitigating the tradeoff between average robustness and robust fairness." |
| Researcher Affiliation | Academia | Xinsong Ma, School of Computer Science, Wuhan University (maxinsong1018@gmail.com); Zekai Wang, School of Computer Science, Wuhan University (wzekai99@gmail.com); Weiwei Liu, School of Computer Science, Wuhan University (liuweiwei863@gmail.com) |
| Pseudocode | No | The paper describes the Fairly Adversarial Training (FAT) method mathematically but does not provide an explicit pseudocode or algorithm block. |
| Open Source Code | Yes | Our code can be found on GitHub at https://github.com/wzekai99/FAT. |
| Open Datasets | Yes | We conduct our experiments on the benchmark datasets CIFAR-10 and CIFAR-100 [15]. (A dataset-loading sketch follows the table.) |
| Dataset Splits | No | The paper mentions using CIFAR-10 and CIFAR-100 for training, but does not explicitly state the dataset splits (e.g., percentages or sample counts for train/validation/test sets) within the provided text. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU model, CPU type) used for running its experiments. |
| Software Dependencies | No | The paper mentions using SGD for optimization, but does not provide specific version numbers for software dependencies or libraries. |
| Experiment Setup | Yes | The maximum number of PGD steps and the step size are set to 20 and ϵ_train/10, respectively. For optimization, we use SGD with 0.9 momentum for 120 epochs. The initial learning rate is set to 0.1 and is divided by 10 at epoch 60 and epoch 80, respectively. (A training-loop sketch based on these settings follows the table.) |
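
For the Open Datasets row above, a minimal loading sketch for the benchmark datasets the paper names (CIFAR-10 and CIFAR-100). It assumes PyTorch/torchvision; the augmentation, batch size, and data path are common CIFAR choices, not values stated in the paper.

```python
# Sketch: loading CIFAR-10 / CIFAR-100 (paper's benchmark datasets) with torchvision.
# Transforms, batch size, and root path are placeholder assumptions.
import torchvision
import torchvision.transforms as T
from torch.utils.data import DataLoader

train_transform = T.Compose([
    T.RandomCrop(32, padding=4),     # standard CIFAR augmentation (assumption)
    T.RandomHorizontalFlip(),
    T.ToTensor(),
])
test_transform = T.ToTensor()

# CIFAR-10 shown; swap in torchvision.datasets.CIFAR100 for the CIFAR-100 experiments.
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=True, transform=train_transform)
test_set = torchvision.datasets.CIFAR10(root="./data", train=False,
                                        download=True, transform=test_transform)

train_loader = DataLoader(train_set, batch_size=128, shuffle=True, num_workers=4)
test_loader = DataLoader(test_set, batch_size=128, shuffle=False, num_workers=4)
```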
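For the Experiment Setup row, a sketch of a standard PGD adversarial-training loop wired to the reported hyperparameters: 20 PGD steps, step size ϵ_train/10, SGD with 0.9 momentum for 120 epochs, and learning rate 0.1 divided by 10 at epochs 60 and 80. This is plain PGD-AT rather than the paper's FAT objective, and the model, ϵ_train = 8/255, and weight decay are illustrative assumptions.

```python
# Sketch of a standard PGD adversarial-training loop using the reported settings.
# NOT the paper's FAT objective; eps_train, the model, and weight decay are assumptions.
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps, steps=20, step_size=None):
    """L-infinity PGD with random start; step size defaults to eps/10 as in the paper."""
    step_size = eps / 10 if step_size is None else step_size
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + step_size * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1).detach()
    return x_adv

def adversarial_train(model, train_loader, device, eps_train=8 / 255, epochs=120):
    # SGD with 0.9 momentum; lr 0.1 divided by 10 at epochs 60 and 80 (paper's schedule).
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9,
                                weight_decay=5e-4)  # weight decay not stated in the paper
    scheduler = torch.optim.lr_scheduler.MultiStepLR(optimizer,
                                                     milestones=[60, 80], gamma=0.1)
    for _ in range(epochs):
        model.train()
        for x, y in train_loader:
            x, y = x.to(device), y.to(device)
            x_adv = pgd_attack(model, x, y, eps=eps_train)  # 20 steps, eps_train/10 step size
            optimizer.zero_grad()
            F.cross_entropy(model(x_adv), y).backward()     # vanilla AT loss, not FAT
            optimizer.step()
        scheduler.step()
    return model
```

The `MultiStepLR` schedule with milestones [60, 80] and gamma 0.1 directly mirrors the two learning-rate decays reported in the paper; everything else outside the quoted hyperparameters should be treated as a placeholder.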