AVA: Adversarial Vignetting Attack against Visual Recognition
Authors: Binyu Tian, Felix Juefei-Xu, Qing Guo, Xiaofei Xie, Xiaohong Li, Yang Liu
IJCAI 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We validate the proposed methods on three popular datasets, i.e., DEV, CIFAR10, and Tiny ImageNet, by attacking four CNNs, i.e., ResNet50, EfficientNet-B0, DenseNet121, and MobileNet-V2, demonstrating the advantages of our methods over baseline methods on both transferability and image quality. |
| Researcher Affiliation | Collaboration | Binyu Tian¹, Felix Juefei-Xu², Qing Guo³, Xiaofei Xie³, Xiaohong Li¹, Yang Liu³ — ¹College of Intelligence and Computing, Tianjin University, China; ²Alibaba Group, USA; ³Nanyang Technological University, Singapore |
| Pseudocode | Yes | We summarize the workflow of our attacking algorithm in the following steps: (1) Initialize the parameters P = {f⁻¹, α, τ, χ} = {1, 0, 0, 0}, the geometry vignetting matrix G as 1 − αR, and the distance matrix R via R[i] = √(u_i² + v_i²). (2) Calculate the illumination-related matrix A via Eq. (2) and the camera-tilting-related matrix T via Eq. (4). (3) At the t-th iteration, calculate the gradients of G_t and P_t with respect to the objective function Eq. (8), obtaining ∇G_t and {∇ρ_t \| ρ_t ∈ P_t}. (4) Update G_t and P_t with their own step sizes. (5) Set t = t + 1 and return to step (3) until the maximum iteration is reached or vig(I, P) fools the DNN. |
| Open Source Code | No | The paper does not provide an explicit statement about open-sourcing the code for the described methodology or a link to a code repository. |
| Open Datasets | Yes | We carry out our experiments on three popular datasets, i.e., DEV [Google, 2017], CIFAR10 [Krizhevsky and Hinton, 2009], and Tiny ImageNet [Stanford, 2017]. |
| Dataset Splits | Yes | We train these models on the CIFAR10 and Tiny ImageNet datasets. For the DEV dataset, we use the pretrained models. |
| Hardware Specification | No | The paper does not explicitly describe the specific hardware (e.g., GPU models, CPU types) used to run its experiments. |
| Software Dependencies | No | The paper mentions the use of 'state-of-the-art deep convolutional neural networks (CNNs)' and specific models like 'ResNet50, EfficientNet-B0, DenseNet121, and MobileNet-V2', but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | In the experimental part, we set our hyper-parameters as follows: we set the step sizes of f⁻¹, α, τ, χ, and G_t as 0.0125, 0.0125, 0.01, 0.01, and 0.0125, respectively. We set the number of iterations to 40 and z of the level-set method to 1.0. We set p to be , and set the ϵ of f⁻¹, α, τ, and χ as 0.5, 0.5, π/6, and π/6. In addition, we set λ_f, λ_g, and λ_α all to 1. |
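The workflow quoted in the Pseudocode row, together with the step size (0.0125), bound (ϵ = 0.5), and iteration budget (40) from the Experiment Setup row, can be sketched numerically. This is a minimal toy sketch, not the paper's method: the factor forms for A, G, and T are assumptions in the spirit of classic Kang–Weiss vignetting (standing in for the paper's Eqs. (2)–(4)), and mean image brightness stands in for the attack objective of Eq. (8).

```python
import numpy as np

def vignetting_map(h, w, f=1.0, alpha=0.0, tau=0.0, chi=0.0):
    """Compose illumination (A), geometry (G), and tilt (T) factors.

    The factor forms below are assumed (Kang-Weiss-style), standing in
    for the paper's Eqs. (2)-(4).
    """
    # Normalized pixel coordinates (u, v) centered on the image midpoint.
    v, u = np.meshgrid(np.linspace(-1, 1, h), np.linspace(-1, 1, w), indexing="ij")
    R = np.sqrt(u ** 2 + v ** 2)              # distance matrix: R = sqrt(u^2 + v^2)
    A = 1.0 / (1.0 + (R / f) ** 2) ** 2       # illumination falloff (assumed form)
    G = 1.0 - alpha * R                       # geometry factor, equal to 1 when alpha = 0
    T = np.cos(tau) * np.cos(chi) ** 3 * np.ones_like(R)  # crude tilt factor (assumed)
    return A * G * T

def vig(image, params):
    """Apply the vignetting map V(P) to an image: vig(I, P) = V * I."""
    f, alpha, tau, chi = params
    return image * vignetting_map(*image.shape, f, alpha, tau, chi)

img = np.ones((8, 8))  # toy grayscale image

# Toy sign-gradient loop on alpha with the quoted step size (0.0125),
# bound (epsilon = 0.5), and iteration budget (40); the real attack
# additionally updates f, tau, chi, and G_t with their own step sizes.
alpha, step, eps = 0.0, 0.0125, 0.5
for _ in range(40):
    d = 1e-4  # finite-difference gradient of the toy loss L(alpha) = mean(vig(I, P))
    g = (vig(img, (1.0, alpha + d, 0.0, 0.0)).mean()
         - vig(img, (1.0, alpha - d, 0.0, 0.0)).mean()) / (2 * d)
    alpha = float(np.clip(alpha - step * np.sign(g), -eps, eps))

out = vig(img, (1.0, alpha, 0.0, 0.0))
```

With the initial parameters {1, 0, 0, 0} only the illumination falloff A acts; the loop then walks α to its ϵ-bound of 0.5 in 40 steps of 0.0125, darkening the image edges more than the center.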