Adversarial examples in the physical world

Authors: Alexey Kurakin, Ian J. Goodfellow, Samy Bengio

ICLR 2017

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We demonstrate this by feeding adversarial images obtained from a cell-phone camera to an ImageNet Inception classifier and measuring the classification accuracy of the system. To investigate the extent to which adversarial examples survive in the physical world, we conducted an experiment with a pre-trained ImageNet Inception classifier (Szegedy et al., 2015). Results of the photo transformation experiment are summarized in Tables 1, 2 and 3.
Researcher Affiliation | Industry | Alexey Kurakin (Google Brain, kurakin@google.com); Ian J. Goodfellow (OpenAI, ian@openai.com); Samy Bengio (Google Brain, bengio@google.com)
Pseudocode | No | The paper describes its methods with mathematical equations but does not include structured pseudocode or algorithm blocks.
Open Source Code | No | The paper mentions the open-source TensorFlow camera demo available at https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/android, but this is a third-party tool the authors used, not their own implementation code for the methodology.
Open Datasets | Yes | The experiments were performed on all 50,000 validation samples from the ImageNet dataset (Russakovsky et al., 2014) using a pre-trained Inception v3 classifier (Szegedy et al., 2015).
Dataset Splits | Yes | The experiments were performed on all 50,000 validation samples from the ImageNet dataset (Russakovsky et al., 2014) using a pre-trained Inception v3 classifier (Szegedy et al., 2015). For this set of experiments we used a subset of 1,000 images randomly selected from the validation set.
Hardware Specification | No | The paper mentions a cell-phone camera (Nexus 5x) for capturing images and a Ricoh MP C5503 office printer for printing, but it does not specify the hardware (e.g., CPU or GPU models, memory) used to generate adversarial examples or to run the Inception v3 classifier for the main experiments.
Software Dependencies | No | The paper mentions the TensorFlow Camera Demo app and the ImageMagick suite for certain tasks but does not provide specific version numbers for these or other software dependencies crucial for reproducing the experiments (e.g., the deep learning framework or libraries used for attack generation).
Experiment Setup | Yes | In our experiments we used α = 1, i.e. we changed the value of each pixel by only 1 on each step. We selected the number of iterations to be min(ϵ + 4, 1.25ϵ). We limit all further experiments to ϵ ≤ 16 because such perturbations are perceived only as small noise (if perceived at all), and adversarial methods are able to produce a significant number of misclassified examples within this ϵ-neighbourhood of clean images.
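The setup above corresponds to the paper's basic iterative method: repeated sign-of-gradient steps of size α, with the result kept inside the ϵ-ball of the clean image. The following NumPy sketch illustrates those parameter choices; `grad_fn` is a hypothetical stand-in for a real model's loss gradient with respect to the input, not part of the paper.

```python
import numpy as np

def basic_iterative_method(x, grad_fn, eps=16, alpha=1):
    """Sketch of the basic iterative attack, assuming pixel values in [0, 255].

    x       : clean image as a float array
    grad_fn : hypothetical callable returning the gradient of the loss
              w.r.t. the image (stands in for a model's backward pass)
    eps     : maximum L-infinity perturbation (the paper limits eps <= 16)
    alpha   : per-step pixel change (the paper uses alpha = 1)
    """
    # Number of iterations chosen as in the paper: min(eps + 4, 1.25 * eps)
    n_iter = int(min(eps + 4, 1.25 * eps))
    x = x.astype(np.float64)
    x_adv = x.copy()
    for _ in range(n_iter):
        # Move each pixel by at most alpha in the gradient-sign direction
        x_adv = x_adv + alpha * np.sign(grad_fn(x_adv))
        # Project back into the eps-ball around the clean image
        x_adv = np.clip(x_adv, x - eps, x + eps)
        # Keep pixels in the valid image range
        x_adv = np.clip(x_adv, 0, 255)
    return x_adv
```

With ϵ = 16 the formula gives min(20, 20) = 20 iterations, so α = 1 steps can reach, but never exceed, the ϵ boundary.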