On the Effects of Fairness to Adversarial Vulnerability
Authors: Cuong Tran, Keyu Zhu, Pascal Van Hentenryck, Ferdinando Fioretto
IJCAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on non-linear models and different architectures validate the theoretical findings. This section empirically validates the theoretical insights discussed earlier, extending them to more complex architectures, datasets, and loss functions. |
| Researcher Affiliation | Academia | Cuong Tran¹, Keyu Zhu², Pascal Van Hentenryck², and Ferdinando Fioretto¹ (¹University of Virginia; ²Georgia Institute of Technology) |
| Pseudocode | No | The paper describes methods in prose but does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide an explicit statement or link for open-sourcing the code for the described methodology. |
| Open Datasets | Yes | Datasets. The experiments of this section focus on three vision datasets: UTK-Face [Zhang et al., 2017], FMNIST [Xiao et al., 2017] and CIFAR-10 [Krizhevsky et al., 2009]. |
| Dataset Splits | No | The paper mentions using UTK-Face, FMNIST, and CIFAR-10 datasets and refers to 'standard labels' for some, but does not provide specific percentages or counts for train/validation/test splits. |
| Hardware Specification | No | The paper does not provide any specific hardware details (e.g., GPU or CPU models, memory) used for running the experiments. |
| Software Dependencies | No | The paper mentions specific software like 'Torchattacks' by reference but does not provide specific version numbers for software dependencies used in their own experimental setup. |
| Experiment Setup | Yes | Models trained on the UTK-Face data use a learning rate of 1e-3 and 70 epochs. Those trained on FMNIST and CIFAR use a learning rate of 1e-1 and 200 epochs, as suggested in previous work [Xu et al., 2021a]. |
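The hyperparameters quoted in the Experiment Setup row can be collected into a small lookup for anyone attempting a reproduction. This is a minimal sketch: only the learning rates and epoch counts come from the paper; the `get_config` helper and dataset key names are illustrative assumptions, and choices the paper does not report (optimizer, batch size) are deliberately omitted.

```python
# Per-dataset hyperparameters as reported in the Experiment Setup row.
# Only lr and epochs are stated in the paper; everything else (optimizer,
# batch size, schedule) is unreported and left out of this sketch.
TRAIN_CONFIG = {
    "UTK-Face": {"lr": 1e-3, "epochs": 70},
    "FMNIST":   {"lr": 1e-1, "epochs": 200},
    "CIFAR-10": {"lr": 1e-1, "epochs": 200},
}

def get_config(dataset: str) -> dict:
    """Return the reported learning rate and epoch count for a dataset."""
    try:
        return TRAIN_CONFIG[dataset]
    except KeyError:
        raise ValueError(f"no reported hyperparameters for {dataset!r}") from None
```

For example, `get_config("UTK-Face")` yields `{"lr": 0.001, "epochs": 70}`; unknown dataset names raise a `ValueError` rather than silently falling back to a default.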