Attacking deep networks with surrogate-based adversarial black-box methods is easy
Authors: Nicholas A. Lord, Romain Mueller, Luca Bertinetto
ICLR 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental setup: We compare GFCS to the methods of Cheng et al. (2019); Tashiro et al. (2020); Yang et al. (2020) by designing an experimental framework covering the key aspects of the original experiments in the respective source works. Results: Table 1 reports attack success rates and median query counts, and Fig. 2 plots cumulative success counts against the maximum queries spent per example (CDFs, modulo normalisation). |
| Researcher Affiliation | Industry | Nicholas A. Lord, Romain Mueller & Luca Bertinetto www.five.ai {nick,romain.mueller,luca.bertinetto}@five.ai |
| Pseudocode | Yes | Algorithm 1 GFCS: Gradient First, Coimage Second |
| Open Source Code | Yes | Code is available at https://github.com/fiveai/GFCS. We accompany this submission with the code implementing the proposed GFCS method. |
| Open Datasets | Yes | We use each method to perform ℓ2-norm-constrained untargeted attacks against the same 2000 randomly chosen correctly classified ILSVRC2012 validation images per victim network. ... using CIFAR-10 as the dataset |
| Dataset Splits | Yes | We use each method to perform ℓ2-norm-constrained untargeted attacks against the same 2000 randomly chosen correctly classified ILSVRC2012 validation images per victim network. We chose 2.0 as the default value for our experiments by performing a small grid search over a held-out set (disjoint from the 2000 examples used in the experiments of the main paper). |
| Hardware Specification | No | The paper mentions training on 'GPUs' implicitly through the context of deep learning, but does not specify any particular models (e.g., NVIDIA A100, RTX 2080 Ti) or other specific hardware components used for running experiments. |
| Software Dependencies | No | All networks used are pretrained models available via PyTorch/torchvision. |
| Experiment Setup | Yes | A maximum query count of 10000 is set per example (beyond which failure is declared), and the ℓ2 bound (enforced using PGA) is set to the commonly chosen 0.001D, where D is the image dimension in the victim network's native input resolution. We chose 2.0 as the default value for our experiments by performing a small grid search over a held-out set... |
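The experiment-setup row cites an ℓ2 bound of 0.001·D enforced by projected gradient ascent (PGA), where D is the input dimension. A minimal sketch of that projection step, using numpy rather than the paper's actual PyTorch code (the helper name `project_l2` is our own, not from the GFCS repository):

```python
import numpy as np

def project_l2(delta, epsilon):
    """Project a perturbation onto the L2 ball of radius epsilon.

    This is the projection step of PGA: perturbations whose L2 norm
    exceeds epsilon are rescaled onto the ball's surface; smaller
    ones are left untouched.
    """
    flat = delta.reshape(-1)
    norm = np.linalg.norm(flat)
    if norm > epsilon:
        flat = flat * (epsilon / norm)
    return flat.reshape(delta.shape)

# For a 224x224x3 ImageNet-resolution input, D = 224*224*3 = 150528,
# so the paper's bound would be epsilon = 0.001 * D ≈ 150.5.
D = 224 * 224 * 3
epsilon = 0.001 * D

rng = np.random.default_rng(0)
delta = rng.normal(size=(224, 224, 3))   # stand-in for an attack perturbation
projected = project_l2(delta, epsilon)
```

Inside the attack loop, this projection would run after every perturbation update, keeping each candidate within the stated ℓ2 budget until success or the 10000-query cap.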