PopSkipJump: Decision-Based Attack for Probabilistic Classifiers

Authors: Carl-Johann Simon-Gabriel, Noman Ahmed Sheikh, Andreas Krause

ICML 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We test our attack on various noise models, including state-of-the-art off-the-shelf randomized defenses, and show that they offer almost no extra robustness to decision-based attacks. Code is available at https://github.com/cjsg/PopSkipJump." and "4. Experiments: The goal of our experiments is to verify points 1. to 4. from the introduction."
Researcher Affiliation | Academia | "ETH Zürich. Correspondence to: CJSG <cjsg@ethz.ch>."
Pseudocode | Yes | "Algorithm 1 PopSkipJump" and "Algorithm 2 Noisy Bin Search" (a hedged sketch of the noisy binary-search idea appears below the table)
Open Source Code | Yes | "Code is available at https://github.com/cjsg/PopSkipJump."
Open Datasets | Yes | "We ran all experiments on the MNIST (LeCun et al., 1998) and CIFAR10 (Krizhevsky, 2009) image datasets."
Dataset Splits | No | The paper mentions using subsets of the MNIST and CIFAR10 test sets but does not provide specific train/validation/test splits or cross-validation details.
Hardware Specification | Yes | "it could take a minute per attack on a GeForce GTX 1080 for MNIST and a few minutes for CIFAR10"
Software Dependencies | No | The paper describes the software environment (e.g., the neural networks used on each dataset) but does not list specific software dependencies with version numbers.
Experiment Setup | Yes | "Input: attacked point x; starting point x_0 from adversarial class; probabilistic classifier φ; input dim d; HSJ parameters: sampling sizes n_t^det, sampling radii δ_t^det, min bin-sizes θ_t^det and gradient step sizes ξ_t^det." and "apply dropout with a uniform dropout rate α ∈ [0, 1] and add centered Gaussian noise with standard deviation σ to every input." (a hedged sketch of these noise models appears below the table)
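
The quoted "Algorithm 2 Noisy Bin Search" is the probabilistic counterpart of HopSkipJump's boundary bisection: because the classifier's answer is random, no single label is trusted; each candidate point on the segment between the attacked point and the adversarial starting point is queried several times, and the empirical adversarial frequency drives the bisection. The sketch below is a minimal illustration of that idea, not the paper's algorithm; `query_prob`, `noisy_bin_search`, `n_samples`, and `tol` are hypothetical names and parameters.

```python
import numpy as np

# Minimal sketch of a noisy binary search along the segment between the
# attacked point x_star and an adversarial starting point x0. Instead of
# trusting one stochastic label, each candidate is queried n_samples times
# and the empirical adversarial frequency drives the bisection.
# (Illustrative only; not the paper's Algorithm 2.)

def query_prob(classifier, x, adversarial_label, n_samples=20):
    """Estimate P(classifier(x) == adversarial_label) by repeated sampling."""
    hits = sum(classifier(x) == adversarial_label for _ in range(n_samples))
    return hits / n_samples

def noisy_bin_search(classifier, x_star, x0, adversarial_label,
                     n_samples=20, tol=1e-3):
    """Return a point near the probabilistic decision boundary on the
    segment x_star -> x0, i.e. where the adversarial probability crosses 1/2."""
    lo, hi = 0.0, 1.0          # lo: attacked side, hi: adversarial side
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        x_mid = (1.0 - mid) * x_star + mid * x0
        if query_prob(classifier, x_mid, adversarial_label, n_samples) >= 0.5:
            hi = mid           # adversarial often enough: boundary lies closer to x_star
        else:
            lo = mid
    return (1.0 - hi) * x_star + hi * x0

# Toy usage: a noisy 1-D "classifier" whose decision flips around x = 0.6.
if __name__ == "__main__":
    rng = np.random.default_rng(0)

    def noisy_classifier(x):
        p_adv = 1.0 / (1.0 + np.exp(-30.0 * (x[0] - 0.6)))  # sigmoid decision
        return 1 if rng.random() < p_adv else 0

    x_star, x0 = np.array([0.0]), np.array([1.0])
    boundary = noisy_bin_search(noisy_classifier, x_star, x0, adversarial_label=1)
    print("estimated boundary point:", boundary)
```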
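
The Experiment Setup row describes how a deterministic network is turned into a probabilistic classifier: centered Gaussian noise of standard deviation σ is added to every input, and dropout with rate α is kept active at query time. The wrapper below is a minimal sketch of that setup under stated assumptions: `NoisyClassifier`, its argument names, and the choice to apply dropout directly to the input (rather than inside the network, as a dropout defense typically would) are illustrative, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

# Sketch of wrapping a deterministic network into a probabilistic classifier
# using the two noise models quoted above: centered Gaussian input noise with
# std sigma, and dropout with rate alpha kept stochastic at query time.
# (Illustrative only; dropout is applied to the input here as a simplification.)

class NoisyClassifier:
    def __init__(self, net, sigma=0.0, alpha=0.0):
        self.net = net          # any torch.nn.Module mapping images to logits
        self.sigma = sigma      # std of centered Gaussian input noise
        self.alpha = alpha      # dropout rate in [0, 1]

    @torch.no_grad()
    def __call__(self, x):
        """Return one sampled label per input; repeated calls may differ."""
        if self.sigma > 0:
            x = x + self.sigma * torch.randn_like(x)
        if self.alpha > 0:
            # training=True keeps dropout stochastic even at inference time
            x = F.dropout(x, p=self.alpha, training=True)
        logits = self.net(x)
        return logits.argmax(dim=-1)

# Toy usage with a linear "network" on MNIST-sized inputs.
if __name__ == "__main__":
    net = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.Linear(28 * 28, 10))
    phi = NoisyClassifier(net, sigma=0.1, alpha=0.3)
    x = torch.rand(4, 1, 28, 28)
    print(phi(x))   # labels can change across calls because of the noise
```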