Adversarial camera stickers: A physical camera-based attack on deep learning systems

Authors: Juncheng Li, Frank Schmidt, Zico Kolter

ICML 2019 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We train the attack, and conducted evaluation experiments, both on a real camera using printed stickers, and (virtually, but with physically realistic perturbations) on the ImageNet dataset (Deng et al., 2009). Our experiments show that on real video data with a physically manufactured sticker, we can achieve an average targeted fooling rate of 52% over 5 different class / target class combinations, and furthermore reduce the accuracy of the classifier to 27%.
Researcher Affiliation | Collaboration | Bosch Center for Artificial Intelligence; School of Computer Science, Carnegie Mellon University, Pittsburgh, USA.
Pseudocode | Yes | Algorithm 1 Coordinate Descent: maximize the attack
Open Source Code | No | Our demo video can be viewed at: https://youtu.be/wUVmL33Fx54. The paper provides a link to a demo video, but not to the source code for the methodology described.
Open Datasets | Yes | We train the attack, and conducted evaluation experiments, both on a real camera using printed stickers, and (virtually, but with physically realistic perturbations) on the ImageNet dataset (Deng et al., 2009).
Dataset Splits | No | The paper mentions using the ImageNet dataset and its test set, and a subset of 1000 images for training, but it does not specify explicit training, validation, and test splits (e.g., percentages or counts) needed for reproduction.
Hardware Specification | No | The paper mentions using an 'HP Color LaserJet M253' for printing stickers, but it does not specify the computing hardware (e.g., CPU, GPU models, memory) used for running the experimental computations (e.g., model training or inference).
Software Dependencies | No | The paper mentions using the 'PyTorch library' and a 'ResNet-50 classifier', but it does not provide specific version numbers for PyTorch or any other software dependencies, which are necessary for reproducibility.
Experiment Setup | No | The paper describes the optimization process, including coordinate descent and gradient descent, and specifies a 45x45 grid for dot placement (see the illustrative sketch below). However, it does not provide concrete hyperparameter values such as learning rates, batch sizes, number of epochs, or specific optimizer configurations, which are crucial for reproducing the experimental setup.
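The Pseudocode and Experiment Setup rows above refer to the paper's Algorithm 1, a coordinate-descent search over a 45x45 grid of sticker-dot placements. As a purely illustrative aid, here is a minimal PyTorch sketch of that style of attack: a single translucent dot is alpha-blended onto the image, its centre is chosen by sweeping the placement grid, and its colour is then refined by gradient descent toward a target class. The blending model, dot radius, opacity, optimizer, learning rate, step count, and target class are all assumptions made for this sketch; they are not values reported in the paper, which (as noted above) does not specify its hyperparameters.

```python
# Illustrative sketch only; not the authors' implementation.
import torch
import torch.nn.functional as F
from torchvision import models

device = "cuda" if torch.cuda.is_available() else "cpu"
# Assumes torchvision >= 0.13 for the string `weights` argument.
model = models.resnet50(weights="IMAGENET1K_V1").to(device).eval()

H = W = 224        # classifier input resolution
GRID = 45          # the paper searches dot centres over a 45x45 grid
TARGET = 932       # hypothetical target class index

def render_dot(images, center, color, radius=25.0, opacity=0.4):
    """Alpha-blend one translucent dot with a smooth (Gaussian) falloff.
    Images are assumed to lie in [0, 1]; classifier normalisation is omitted."""
    ys = torch.arange(H, device=images.device, dtype=torch.float32).view(H, 1)
    xs = torch.arange(W, device=images.device, dtype=torch.float32).view(1, W)
    dist2 = (ys - center[0]) ** 2 + (xs - center[1]) ** 2
    mask = opacity * torch.exp(-dist2 / (2 * radius ** 2))        # (H, W) soft mask
    return (1 - mask) * images + mask * color.view(1, 3, 1, 1)

def targeted_loss(images, center, color):
    """Cross-entropy toward the target class; minimising it maximises the attack."""
    logits = model(render_dot(images, center, color))
    labels = torch.full((images.size(0),), TARGET, dtype=torch.long,
                        device=images.device)
    return F.cross_entropy(logits, labels)

def attack(images, steps=50, lr=1e-2):
    """Coordinate descent over dot positions, then gradient descent on colour."""
    color = torch.tensor([0.8, 0.2, 0.2], device=images.device, requires_grad=True)
    best_center, best_loss = (0.0, 0.0), float("inf")
    for gy in range(GRID):                       # sweep the placement grid
        for gx in range(GRID):
            center = ((gy + 0.5) * H / GRID, (gx + 0.5) * W / GRID)
            with torch.no_grad():
                loss = targeted_loss(images, center, color).item()
            if loss < best_loss:
                best_center, best_loss = center, loss
    opt = torch.optim.Adam([color], lr=lr)       # refine colour at the best position
    for _ in range(steps):
        opt.zero_grad()
        targeted_loss(images, best_center, color).backward()
        opt.step()
        color.data.clamp_(0.0, 1.0)              # keep the dot colour printable
    return best_center, color.detach()
```

Minimising the targeted cross-entropy here is equivalent to maximising the target-class probability, i.e. "maximizing the attack" in the sense of Algorithm 1. The paper's full method additionally models multiple dots and the physical printing and camera-optics pipeline, which this sketch omits.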