Evading Adversarial Example Detection Defenses with Orthogonal Projected Gradient Descent

Authors: Oliver Bryniarski, Nabeel Hingun, Pedro Pachuca, Vincent Wang, Nicholas Carlini

ICLR 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | We use our technique to evade four state-of-the-art detection defenses, reducing their accuracy to 0% while maintaining a 0% detection rate. ... Datasets. We attack each defense on the dataset that it performs best on. All of our defenses operate on images. For three of these defenses, this is the CIFAR-10 dataset (KH09), and for one, it is the ImageNet dataset (DDS+09).
Researcher Affiliation | Collaboration | Oliver Bryniarski (UC Berkeley), Nabeel Hingun (UC Berkeley), Pedro Pachuca (UC Berkeley), Vincent Wang (UC Berkeley), Nicholas Carlini (Google)
Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks.
Open Source Code | No | All of the code we used to generate our results will be made open source in a GitHub repository.
Open Datasets | Yes | For three of these defenses, this is the CIFAR-10 dataset (KH09), and for one, it is the ImageNet dataset (DDS+09).
Dataset Splits | No | The paper uses standard datasets (CIFAR-10, ImageNet) with predefined splits, but it does not explicitly state the training, validation, and test splits used for its own experiments as percentages or counts.
Hardware Specification | No | We perform all evaluations on a single GPU.
Software Dependencies | No | The paper mentions converting models to PyTorch and reimplementing a defense in PyTorch and MATLAB, but it does not specify version numbers for any software dependencies.
Experiment Setup | Yes | Attack Hyperparameters. We use the same hyperparameter setting for all attacks shown below. We set the distortion bound ε to 0.01 and 0.031; ... We run our attack for N = 1000 iterations of gradient descent with a step size α = ε/10 ...
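
The Experiment Setup row quotes the core attack loop: ε-bounded gradient descent for N = 1000 iterations with step size α = ε/10. The sketch below is a minimal illustration of how an Orthogonal PGD-style step with those hyperparameters could look in PyTorch; the `classifier` and `detector` interfaces are hypothetical placeholders, and this is not the authors' released implementation. The paper's full attack involves details beyond this single projection step; this block only shows the orthogonal-projection idea with the quoted hyperparameters.

```python
# Minimal Orthogonal PGD-style sketch using the hyperparameters quoted above
# (eps = 0.031, N = 1000, alpha = eps / 10). `classifier` and `detector` are
# assumed to be differentiable torch modules (hypothetical interfaces).
import torch
import torch.nn.functional as F

def orthogonal_pgd(x, y, classifier, detector, eps=0.031, n_iter=1000):
    alpha = eps / 10                      # step size from the quoted setup
    x_adv = x.clone().detach()

    for _ in range(n_iter):
        x_adv = x_adv.detach().requires_grad_(True)

        # Untargeted classifier loss (push the prediction away from label y)
        # and the detector's score for the same perturbed input.
        cls_loss = F.cross_entropy(classifier(x_adv), y)
        det_score = detector(x_adv).sum()

        g_cls, = torch.autograd.grad(cls_loss, x_adv, retain_graph=True)
        g_det, = torch.autograd.grad(det_score, x_adv)

        # Core idea: remove from the classifier-attack gradient its component
        # along the detector's gradient, so the step leaves the detection
        # score (approximately) unchanged.
        b = x_adv.shape[0]
        gc, gd = g_cls.view(b, -1), g_det.view(b, -1)
        coef = (gc * gd).sum(1, keepdim=True) / \
               (gd * gd).sum(1, keepdim=True).clamp_min(1e-12)
        g_orth = (gc - coef * gd).view_as(g_cls)

        # Signed ascent step, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * g_orth.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0.0, 1.0)

    return x_adv.detach()
```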