Adversarial Turing Patterns from Cellular Automata
Authors: Nurislam Tursynbek, Ilya Vilkoviskiy, Maria Sindeeva, Ivan Oseledets (pp. 2683-2691)
AAAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Furthermore, we propose to use Turing patterns, generated by cellular automata, as universal perturbations, and experimentally show that they significantly degrade the performance of deep learning models. |
| Researcher Affiliation | Academia | Nurislam Tursynbek, Ilya Vilkoviskiy, Maria Sindeeva, Ivan Oseledets (Skolkovo Institute of Science and Technology) |
| Pseudocode | No | The paper provides mathematical equations and descriptions of processes (e.g., Equation 4 for Boyd iteration, Equation 12 for cellular automata update rule), but it does not include a block explicitly labeled 'Pseudocode' or 'Algorithm'. |
| Open Source Code | Yes | The source code is available at https://github.com/NurislamT/advTuring. |
| Open Datasets | Yes | For all approaches the target function is optimized on 512 random images from the ImageNet (Russakovsky et al. 2015) train dataset, and the perturbation is constrained as ε ≤ 10. Following (Meunier, Atif, and Teytaud 2019), we used the evolutionary algorithm CMA-ES (Hansen, Müller, and Koumoutsakos 2003) by Nevergrad (Rapin and Teytaud 2018) as a black-box optimization algorithm. Fooling rates are calculated for 10,000 ImageNet validation images for torchvision pretrained models (VGG19 (Simonyan and Zisserman 2014), Inception V3 (Szegedy et al. 2016), MobileNet V2 (Sandler et al. 2018)). |
| Dataset Splits | Yes | For all approaches the target function is optimized on 512 random images from the ImageNet (Russakovsky et al. 2015) train dataset, and the perturbation is constrained as ε ≤ 10. [...] Fooling rates are calculated for 10,000 ImageNet validation images for torchvision pretrained models (VGG19 (Simonyan and Zisserman 2014), Inception V3 (Szegedy et al. 2016), MobileNet V2 (Sandler et al. 2018)). |
| Hardware Specification | No | The paper does not provide any specific hardware details such as CPU models, GPU models, or memory specifications used for conducting the experiments. |
| Software Dependencies | No | Following (Meunier, Atif, and Teytaud 2019), we used the evolutionary algorithm CMA-ES (Hansen, Müller, and Koumoutsakos 2003) by Nevergrad (Rapin and Teytaud 2018) as a black-box optimization algorithm. Fooling rates are calculated for 10,000 ImageNet validation images for torchvision pretrained models (VGG19 (Simonyan and Zisserman 2014), Inception V3 (Szegedy et al. 2016), MobileNet V2 (Sandler et al. 2018)). |
| Experiment Setup | Yes | For all approaches the target function is optimized on 512 random images from the ImageNet (Russakovsky et al. 2015) train dataset, and the perturbation is constrained as ε ≤ 10. Following (Meunier, Atif, and Teytaud 2019), we used the evolutionary algorithm CMA-ES (Hansen, Müller, and Koumoutsakos 2003) by Nevergrad (Rapin and Teytaud 2018) as a black-box optimization algorithm. [...] Simple CA. Here, we fix the kernel Y in Equation (12) to be L × L (we find that L = 13 produces best results), with elements filled by 1, except for the inner central rectangle of size l1 × l2 with constant elements such that the sum of all elements of Y is 0. Besides l1 and l2, the initialization of n(i, j) from Eq. (12) is added as parameters. To reduce black-box queries, initialization is chosen to be 7 × 7 square tiles (size 32 × 32) (Meunier, Atif, and Teytaud 2019) for each of the 3 maps representing each image channel. |
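The "Simple CA" excerpt above specifies the kernel construction and the tiled initialization precisely enough to sketch. A minimal NumPy sketch under stated assumptions: the `l1 = l2 = 5` defaults and the ±1-valued initialization are illustrative choices, not values quoted from the paper, and the CA update rule itself (Eq. 12) is not reproduced here.

```python
import numpy as np

def make_kernel(L=13, l1=5, l2=5):
    """Build the L x L kernel Y from the quoted setup: all ones,
    except an inner central l1 x l2 rectangle whose constant value
    is chosen so that the kernel sums to zero."""
    Y = np.ones((L, L))
    inner = l1 * l2
    # (L*L - inner)*1 + inner*c = 0  ->  c = -(L*L - inner) / inner
    c = -(L * L - inner) / inner
    r0, c0 = (L - l1) // 2, (L - l2) // 2
    Y[r0:r0 + l1, c0:c0 + l2] = c
    return Y

def tiled_init(rng, tiles=7, tile_size=32, channels=3):
    """Low-dimensional initialization of n(i, j): a 7 x 7 grid of
    parameters per channel, each repeated over a 32 x 32 block,
    giving 7*32 = 224-pixel maps (ImageNet resolution) while keeping
    the number of black-box parameters small."""
    small = rng.choice([-1.0, 1.0], size=(channels, tiles, tiles))
    return np.kron(small, np.ones((1, tile_size, tile_size)))
```

For example, `make_kernel()` returns a 13 × 13 kernel whose elements sum to zero, and `tiled_init(np.random.default_rng(0))` returns a (3, 224, 224) array with one free parameter per 32 × 32 tile per channel.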