Query-Efficient Hard-label Black-box Attack: An Optimization-based Approach

Authors: Minhao Cheng, Thong Le, Pin-Yu Chen, Huan Zhang, Jinfeng Yi, Cho-Jui Hsieh

ICLR 2019

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | We test the performance of our hard-label black-box attack algorithm on convolutional neural network (CNN) models and compare with Boundary attack (Brendel et al., 2017), Limited attack (Ilyas et al., 2018) and a random trial baseline.
Researcher Affiliation | Collaboration | Minhao Cheng, Huan Zhang & Cho-Jui Hsieh, Department of Computer Science, University of California, Los Angeles ({mhcheng,huanzhang,chohsieh}@cs.ucla.edu); Thong Le, Department of Computer Science, University of California, Davis (thmle@ucdavis.edu); Pin-Yu Chen, IBM Research AI (pin-yu.chen@ibm.com); Jinfeng Yi, JD AI Research (yijinfeng@jd.com)
Pseudocode | Yes | Algorithm 1: Compute g(θ) locally ... Algorithm 2: RGF for hard-label black-box attack (see the sketch after this table)
Open Source Code | Yes | All models are trained using Pytorch and our source code is publicly available at https://github.com/LeMinhThong/blackbox-attack
Open Datasets | Yes | We use three standard datasets: MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky, 2009) and ImageNet-1000 (Deng et al., 2009). ... We use two standard datasets: HIGGS (Baldi et al., 2014) for binary classification and MNIST (LeCun et al., 1998) for multi-class classification.
Dataset Splits | Yes | For all the cases except Limited-attack, we conduct adversarial attacks for randomly sampled N = 100 images from validation sets.
Hardware Specification | No | The paper mentions that models are trained using Pytorch but does not provide specific details about the hardware (e.g., GPU/CPU models, memory) used for running the experiments.
Software Dependencies | No | The paper mentions 'Pytorch' for training and the 'LightGBM framework' but does not specify exact version numbers for these or other software dependencies.
Experiment Setup | Yes | We set q = 20 in all the experiments. ... We set t = 100 in all the experiments. ... we set β = 0.005 in all our experiments. ... We also restrict the maximum number of queries to be 1,000,000 for all attacks.
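
To make the quoted pseudocode concrete, below is a minimal NumPy sketch of the two routines the report names: Algorithm 1 estimates g(θ), the distance from the original image x0 to the decision boundary along direction θ, via a coarse then binary search over hard-label queries, and Algorithm 2 runs randomized gradient-free (RGF) descent on g(θ). The q = 20 probes and β = 0.005 tolerance follow the experiment-setup row above; everything else (the `predict` interface, the fixed step size `eta`, the iteration count, and the `g_init` warm start) is an illustrative assumption, not the paper's released implementation.

```python
import numpy as np

def compute_g(predict, x0, y0, theta, beta=0.005, g_init=1.0):
    """Sketch of Algorithm 1: estimate g(theta), the distance from x0 to the
    decision boundary along direction theta, using only hard-label queries.
    `predict` is assumed to return the top-1 label (the hard-label setting)."""
    theta = theta / np.linalg.norm(theta)      # g is defined on the unit direction
    lo, hi = 0.0, g_init
    # Coarse search: grow hi until the label flips (untargeted attack).
    # A real implementation would also cap queries against the overall budget.
    while predict(x0 + hi * theta) == y0:
        lo, hi = hi, 2.0 * hi
    # Binary search: shrink the bracket [lo, hi] until it is tighter than beta.
    while hi - lo > beta:
        mid = (lo + hi) / 2.0
        if predict(x0 + mid * theta) == y0:
            lo = mid
        else:
            hi = mid
    return hi

def rgf_attack(predict, x0, y0, theta0, iters=1000, q=20, beta=0.005, eta=0.01):
    """Sketch of Algorithm 2: randomized gradient-free (RGF) descent on g(theta).
    q = 20 and beta = 0.005 follow the paper's stated experiment setup."""
    theta = theta0 / np.linalg.norm(theta0)
    g = compute_g(predict, x0, y0, theta, beta)
    for _ in range(iters):
        # Zeroth-order gradient estimate averaged over q random Gaussian probes.
        grad = np.zeros_like(theta)
        for _ in range(q):
            u = np.random.randn(*theta.shape)
            u /= np.linalg.norm(u)
            g_u = compute_g(predict, x0, y0, theta + beta * u, beta, g_init=g)
            grad += (g_u - g) / beta * u
        grad /= q
        # Fixed-step descent on g(theta); a stand-in for the paper's step-size search.
        theta = theta - eta * grad
        theta /= np.linalg.norm(theta)
        g = compute_g(predict, x0, y0, theta, beta, g_init=g)
    return x0 + g * theta    # adversarial example on the decision boundary

```

In a real run, every call to `predict` counts against the 1,000,000-query budget quoted above, which is why the sketch warm-starts each distance search from the previous value of g (the `g_init` argument) rather than searching from scratch.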