Combating Adversaries with Anti-adversaries
Authors: Motasem Alfarra, Juan C. Perez, Ali Thabet, Adel Bibi, Philip H.S. Torr, Bernard Ghanem
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We conduct large-scale experiments from black-box to adaptive attacks on CIFAR10, CIFAR100, and ImageNet. Our layer significantly enhances model robustness while coming at no cost on clean accuracy. |
| Researcher Affiliation | Collaboration | 1 King Abdullah University of Science and Technology (KAUST), 2 Facebook Reality Labs, 3 University of Oxford |
| Pseudocode | Yes | Algorithm 1: Anti-adversary classifier g |
| Open Source Code | Yes | Official code: https://github.com/MotasemAlfarra/Combating-Adversaries-with-Anti-Adversaries |
| Open Datasets | Yes | on CIFAR10, CIFAR100 (Krizhevsky and Hinton 2009) and ImageNet (Krizhevsky, Sutskever, and Hinton 2012). |
| Dataset Splits | No | The paper uses CIFAR10, CIFAR100, and ImageNet and specifies evaluation set sizes (e.g., '1000 and 500 instances of CIFAR10 and ImageNet, respectively' for test accuracy), but it does not give explicit training/validation splits, their percentages, or how the splits were generated (e.g., random seed, stratification). |
| Hardware Specification | No | The paper does not provide specific details about the hardware used to run the experiments (e.g., specific GPU or CPU models, memory, or cloud computing instance types). |
| Software Dependencies | No | The paper does not provide specific software dependencies with version numbers (e.g., Python, deep learning frameworks like PyTorch or TensorFlow, or other libraries). |
| Experiment Setup | Yes | In all experiments, we do not retrain fθ after prepending our anti-adversary layer. We set K = 2 and α = 0.15 whenever Algorithm 1 is used, unless stated otherwise. |
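To make the reported setup concrete, below is a minimal sketch of the anti-adversary idea behind Algorithm 1: before classifying an input, take K = 2 signed-gradient steps of size α = 0.15 that *increase* the base model's confidence in its own predicted (pseudo-)label, then classify the perturbed input. The toy linear-softmax classifier, the function name `anti_adversary_layer`, and all variable names are illustrative assumptions, not the paper's implementation (which prepends the layer to deep networks without retraining).

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def anti_adversary_layer(W, x, K=2, alpha=0.15):
    """Sketch of an anti-adversary layer for a toy linear-softmax model
    with weight matrix W (an assumption; the paper uses deep networks).
    Takes K signed-gradient steps that decrease the cross-entropy loss
    on the model's own predicted label, then classifies x + delta."""
    y_hat = int(np.argmax(W @ x))           # pseudo-label from the base model
    delta = np.zeros_like(x)
    for _ in range(K):
        p = softmax(W @ (x + delta))
        e_y = np.zeros_like(p)
        e_y[y_hat] = 1.0
        grad = W.T @ (p - e_y)              # d(cross-entropy)/dx for softmax(Wx)
        delta -= alpha * np.sign(grad)      # descend the loss: anti-adversary step
    return int(np.argmax(W @ (x + delta)))
```

By construction each step moves the input toward higher confidence on the pseudo-label, the opposite direction of an FGSM-style attack, which is the intuition the paper's layer exploits:

```python
W = np.eye(2)
x = np.array([0.3, 0.1])
print(anti_adversary_layer(W, x))  # → 0 (same label as the base model, higher confidence)
```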