Unrestricted Adversarial Examples via Semantic Manipulation

Authors: Anand Bhattad, Min Jin Chong, Kaizhao Liang, Bo Li, D. A. Forsyth

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | In this work, we propose unrestricted attack strategies that explicitly manipulate semantic visual representations to generate natural-looking adversarial examples that are far from the original image in terms of the Lp norm distance. ... We conduct our experiments on ImageNet (Deng et al., 2009) by randomly selecting images from 10 sufficiently different classes predicted correctly for the classification attack. ... In addition, we conduct comprehensive user studies to show that our generated semantic adversarial examples are photorealistic to humans despite large magnitude perturbations when compared to other attacks.
Researcher Affiliation | Academia | Anand Bhattad, Min Jin Chong, Kaizhao Liang, Bo Li, D. A. Forsyth (University of Illinois at Urbana-Champaign), {bhattad2, mchong6, kl2, lbo, daf}@illinois.edu
Pseudocode | No | The paper presents mathematical objective functions (Eqs. 1-6) but does not include any pseudocode or algorithm blocks.
Open Source Code | No | The paper does not include any explicit statement or link indicating that the source code for the proposed methods is publicly available.
Open Datasets | Yes | We conduct our experiments on ImageNet (Deng et al., 2009) by randomly selecting images from 10 sufficiently different classes predicted correctly for the classification attack. (A data-selection sketch follows the table below.)
Dataset Splits | No | The paper mentions using the ImageNet and MSCOCO datasets and describes random selection of images, but does not provide specific details on train/validation/test splits, such as percentages or absolute sample counts.
Hardware Specification | No | The paper mentions using pretrained models like ResNet50, DenseNet121, and VGG19 but does not specify the hardware (e.g., GPU models, CPU types) used to run the experiments.
Software Dependencies | No | The paper mentions optimizers like Adam and L-BFGS, and models like VGG19, but does not provide specific version numbers for software dependencies or libraries (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup | Yes | For our experiments, we use Adam Optimizer (Kingma & Ba (2014)) with a learning rate of 10^-4 in cAdv. ... Empirically, from our experiments we find that in terms of color diversity, realism, and robustness of attacks, using k = 4 and 50 hints gives us better adversarial examples. For the rest of this paper, we fix 50 hints for all cAdv_k methods. ... Empirically, we found setting α to be in the range [150, 1000] and β in the range [10^-4, 10^-3] to be successful and also produce less perceptible tAdv examples.
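
The Open Datasets row quotes the paper's procedure of keeping only images from 10 classes that the classifier already predicts correctly. The sketch below is an illustration of that kind of filter, not code from the paper: the validation directory path, the choice of class ids, and the use of ResNet50 are all assumptions made here for concreteness.

```python
# Hypothetical sketch: keep only ImageNet validation images that a pretrained
# ResNet50 already classifies correctly, restricted to 10 chosen classes.
# The directory path, class ids, and model choice are illustrative.
import torch
from torchvision import datasets, models, transforms

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

model = models.resnet50(pretrained=True).eval()
val_set = datasets.ImageFolder("imagenet/val", transform=preprocess)  # assumed path

chosen_classes = set(range(0, 1000, 100))  # 10 well-separated class ids (illustrative)
selected = []
with torch.no_grad():
    for idx, (image, label) in enumerate(val_set):
        if label not in chosen_classes:
            continue
        pred = model(image.unsqueeze(0)).argmax(dim=1).item()
        if pred == label:  # keep only images the model gets right
            selected.append(idx)
```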
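The Experiment Setup row reports Adam with a learning rate of 10^-4 for cAdv. The following is a minimal sketch of that optimizer configuration only; `SemanticPerturbation`, the target class, and the iteration budget are placeholders and do not reproduce the paper's colorization (cAdv) or texture-transfer (tAdv) objectives.

```python
# Minimal sketch of the reported optimizer settings (Adam, lr = 1e-4), not the
# authors' implementation. SemanticPerturbation is a hypothetical stand-in for
# the paper's colorization/texture modules; here it is just a clamped additive latent.
import torch
import torch.nn as nn
from torchvision import models

class SemanticPerturbation(nn.Module):
    def __init__(self, image):
        super().__init__()
        self.register_buffer("image", image)
        self.latent = nn.Parameter(torch.zeros_like(image))

    def forward(self):
        return (self.image + self.latent).clamp(0.0, 1.0)

model = models.resnet50(pretrained=True).eval()
for p in model.parameters():
    p.requires_grad_(False)

image = torch.rand(1, 3, 224, 224)   # stand-in input (normalization omitted for brevity)
target = torch.tensor([123])         # hypothetical target class

attack = SemanticPerturbation(image)
optimizer = torch.optim.Adam(attack.parameters(), lr=1e-4)  # learning rate reported in the paper

for _ in range(200):                 # iteration budget is not reported in the paper
    optimizer.zero_grad()
    logits = model(attack())
    loss = nn.functional.cross_entropy(logits, target)  # generic targeted objective
    loss.backward()
    optimizer.step()
```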