Parsimonious Black-Box Adversarial Attacks via Efficient Combinatorial Optimization

Authors: Seungyong Moon, Gaon An, Hyun Oh Song

ICML 2019

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Our experiments on Cifar-10 and ImageNet show the state of the art black-box attack performance with significant reduction in the required queries compared to a number of recently proposed methods.
Researcher Affiliation | Academia | (1) Department of Computer Science and Engineering, Seoul National University, Seoul, Korea; (2) Neural Processing Research Center.
Pseudocode | Yes | Algorithm 1 Lazy Greedy Insertion, Algorithm 2 Lazy Greedy Deletion, Algorithm 3 Accelerated Local Search w/ Lazy Evaluations, Algorithm 4 Split Block, Algorithm 5 Hierarchical Accelerated Local Search. (A minimal sketch of the lazy greedy block search appears after the table.)
Open Source Code | Yes | The source code is available at https://github.com/snu-mllab/parsimonious-blackbox-attack.
Open Datasets | Yes | Our results on Cifar-10 (Krizhevsky & Hinton, 2009) and ImageNet (Russakovsky et al., 2015).
Dataset Splits | Yes | We then use 1,000 randomly selected images from the validation set that are initially correctly classified. (A small selection sketch follows the table.)
Hardware Specification | No | No specific hardware details (e.g., exact GPU/CPU models, processor types, or memory amounts) used for running the experiments are provided. The paper mentions using pre-trained models but not the hardware they ran on.
Software Dependencies | No | No specific software dependency versions (e.g., Python, TensorFlow, or PyTorch versions) are explicitly mentioned. The paper only mentions that a pretrained Inception v3 classifier was 'provided by Tensorflow'.
Experiment Setup | Yes | We set the initial block size to k = 4. On ImageNet, we set the initial block size to k = 32. We set the maximum distortion of the adversarial image to ϵ = 8 in [0, 255] scale. We restrict the maximum number of queries to 20,000. We run 20 iterations of PGD with a constant step size of 2.0. (These settings are collected in the configuration sketch at the end of this section.)
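
The pseudocode listed above revolves around a lazy greedy search over square image blocks, each of which is pushed to either +ϵ or -ϵ. Below is a minimal Python sketch of the insertion step only, assuming a black-box `loss_fn` that returns the attack loss for a candidate image and counts as one model query per call; the helper names (`perturb`, `lazy_greedy_insertion`) and the block representation are illustrative and do not come from the authors' release.

```python
import heapq
import numpy as np

def perturb(x, plus_blocks, eps):
    """Candidate image: +eps on the blocks in `plus_blocks`, -eps everywhere else."""
    x_adv = x - eps
    for (r0, r1, c0, c1) in plus_blocks:
        x_adv[r0:r1, c0:c1, :] = x[r0:r1, c0:c1, :] + eps
    return np.clip(x_adv, 0.0, 255.0)

def lazy_greedy_insertion(x, blocks, eps, loss_fn):
    """Greedily flip blocks from -eps to +eps while the attack loss keeps increasing.

    Marginal gains live in a max-heap and are re-evaluated lazily: a stale gain is
    only recomputed when its block reaches the top of the heap, which saves model
    queries whenever the refreshed top element is still the best candidate.
    """
    plus = set()
    best_loss = loss_fn(perturb(x, plus, eps))
    heap = []
    for i, b in enumerate(blocks):
        gain = loss_fn(perturb(x, {b}, eps)) - best_loss
        heapq.heappush(heap, (-gain, i, b))
    while heap:
        _, i, b = heapq.heappop(heap)
        gain = loss_fn(perturb(x, plus | {b}, eps)) - best_loss  # refreshed gain
        if heap and gain < -heap[0][0]:
            heapq.heappush(heap, (-gain, i, b))  # still not the best; defer it
            continue
        if gain <= 0:
            break  # no remaining block improves the loss
        plus.add(b)
        best_loss += gain
    return plus, best_loss
```

Here `blocks` would be the grid cells produced from the initial block size k; roughly speaking, the deletion pass runs the same lazy-evaluation pattern in the opposite direction, and the hierarchical variant re-runs the search after splitting each block into smaller ones.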
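For the evaluation set described under Dataset Splits, the selection amounts to filtering out images the target model already misclassifies and sampling 1,000 of the remainder. A small sketch, with a hypothetical `predict_fn` batch-prediction wrapper and an arbitrary seed (the paper does not report one):

```python
import numpy as np

def select_eval_set(images, labels, predict_fn, n=1000, seed=0):
    """Keep only validation images the target model classifies correctly,
    then draw n of them at random for the attack evaluation."""
    preds = predict_fn(images)                  # predicted class indices
    correct = np.flatnonzero(preds == labels)   # indices of correct predictions
    rng = np.random.default_rng(seed)
    chosen = rng.choice(correct, size=n, replace=False)
    return images[chosen], labels[chosen]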
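The hyperparameters quoted under Experiment Setup can be collected into configuration dictionaries such as the following; the key names are illustrative and may not match the flags used in the authors' code.

```python
# Hypothetical configuration dictionaries collecting the hyperparameters quoted
# above; key names are illustrative and may not match the authors' flags.
CIFAR10_ATTACK = {
    "initial_block_size": 4,   # k: side length of the initial square blocks
    "epsilon": 8,              # max L-infinity distortion, in [0, 255] pixel scale
    "max_queries": 20_000,     # query budget per image
}

IMAGENET_ATTACK = {
    "initial_block_size": 32,
    "epsilon": 8,
    "max_queries": 20_000,
}

# PGD settings quoted in the setup (their exact role is as described in the paper).
PGD = {
    "num_iterations": 20,
    "step_size": 2.0,          # constant step size, [0, 255] pixel scale
}
```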