Meta-Learning the Search Distribution of Black-Box Random Search Based Adversarial Attacks
Authors: Maksym Yatsura, Jan Hendrik Metzen, Matthias Hein
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform an empirical evaluation of Meta Square Attack (MSA). First, we consider the data distribution D of CIFAR10 [48] images and a classifier distribution F consisting of the classifiers robust with respect to the ℓ∞-threat model. We use this setting for the meta-training as discussed in Section 4.1. We further consider how the controllers trained for these distributions generalize to working with the other data distributions of CIFAR100 and ImageNet and corresponding distributions of classifiers defined on this data. |
| Researcher Affiliation | Collaboration | Maksym Yatsura, Bosch Center for Artificial Intelligence and University of Tübingen (maksym.yatsura@de.bosch.com); Jan Hendrik Metzen, Bosch Center for Artificial Intelligence (janhendrik.metzen@de.bosch.com); Matthias Hein, University of Tübingen (matthias.hein@uni-tuebingen.de) |
| Pseudocode | No | The paper describes the iterative procedure for adversarial perturbation in Equation (3) and other methodological details in text, but it does not include a clearly labeled 'Pseudocode' or 'Algorithm' block. |
| Open Source Code | Yes | The code is available at https://github.com/boschresearch/meta-rs |
| Open Datasets | Yes | We perform an empirical evaluation of Meta Square Attack (MSA). First, we consider the data distribution D of CIFAR10 [48] images... We further consider how the controllers trained for these distributions generalize to working with the other data distributions of CIFAR100 and ImageNet... |
| Dataset Splits | Yes | Meta-training was run on a set D consisting of 1000 images from the CIFAR10 test set (different from the ones used in evaluation of controllers in the next subsection); a minimal split sketch is given after the table. |
| Hardware Specification | Yes | All computations including meta-training and evaluation of the controllers were performed on a single Nvidia Tesla V100-32GB GPU. |
| Software Dependencies | No | The paper mentions the 'Adam optimizer', the 'advertorch [49] package', and cites 'PyTorch [74]', but does not provide version numbers for these software components. |
| Experiment Setup | Yes | For both update size and color controllers, we use MLP architectures with 2 hidden layers, 10 neurons each, and ReLU activations. We purposefully did not finetune the MLP architecture. Meta-training was run on a set D consisting of 1000 images from the CIFAR10 test set (different from the ones used in evaluation of controllers in the next subsection) and Square Attack with a query budget of 1000 iterations. Therefore, controller behaviour on query regimes higher than 1000 iterations is obtained by extrapolation of the behaviour learned for 1000 iterations. Both controllers were trained simultaneously for 10 epochs using the Adam optimizer with batch size 100 and a cosine step size schedule [50] with learning rate 0.03. A hedged code sketch of this setup follows the table. |
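The dataset split described above (1000 CIFAR10 test images for meta-training, disjoint from the images used to evaluate the controllers) could be reproduced along the following lines. This is a minimal sketch, not the authors' released code (see the meta-rs repository linked above); the random seed and the size of the evaluation subset are illustrative assumptions, as the paper excerpt does not specify them.

```python
# Minimal sketch: carving a 1000-image meta-training set out of the CIFAR-10
# test set, disjoint from the images later used to evaluate the controllers.
import torch
import torchvision
import torchvision.transforms as T

test_set = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=T.ToTensor()
)

generator = torch.Generator().manual_seed(0)  # assumed seed, for a reproducible split
perm = torch.randperm(len(test_set), generator=generator)

meta_train_idx = perm[:1000]   # 1000 images for meta-training, as in the paper
eval_idx = perm[1000:2000]     # disjoint evaluation images; size here is illustrative

meta_train_set = torch.utils.data.Subset(test_set, meta_train_idx.tolist())
eval_set = torch.utils.data.Subset(test_set, eval_idx.tolist())
```

The experiment-setup cell also pins down the controller architecture (2 hidden layers of 10 neurons, ReLU) and the meta-training optimizer (Adam, batch size 100, cosine step size schedule, learning rate 0.03, 10 epochs). The sketch below instantiates just those pieces. The input/output dimensionalities `in_dim`/`out_dim` are hypothetical placeholders not stated in the excerpt, and the meta-training objective itself is omitted.

```python
# Minimal sketch of the controller MLPs and optimizer setup described above.
import torch
import torch.nn as nn

def make_controller(in_dim: int, out_dim: int) -> nn.Module:
    """MLP with 2 hidden layers of 10 neurons each and ReLU activations."""
    return nn.Sequential(
        nn.Linear(in_dim, 10), nn.ReLU(),
        nn.Linear(10, 10), nn.ReLU(),
        nn.Linear(10, out_dim),
    )

# Hypothetical dimensions -- the paper excerpt does not state them.
update_size_controller = make_controller(in_dim=4, out_dim=1)
color_controller = make_controller(in_dim=4, out_dim=1)

# Both controllers are trained simultaneously: one Adam optimizer over the
# union of their parameters, lr 0.03, with a cosine step size schedule [50].
params = list(update_size_controller.parameters()) + list(color_controller.parameters())
optimizer = torch.optim.Adam(params, lr=0.03)

epochs, batch_size, n_images = 10, 100, 1000
steps_per_epoch = n_images // batch_size
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs * steps_per_epoch
)
```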
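Inside a training loop, `optimizer.step()` would be followed by `scheduler.step()` once per batch so the learning rate anneals over the full 10 epochs; the meta-training loss driving those steps (the attack-success objective from Section 4.1) is beyond what the quoted setup specifies and is therefore not sketched here.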