Black-Box Adversarial Attack with Transferable Model-based Embedding

Authors: Zhichao Huang, Tong Zhang

ICLR 2020

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our approach on MNIST, ImageNet and Google Cloud Vision API, resulting in a significant reduction on the number of queries. We also attack adversarially defended networks on CIFAR10 and ImageNet, where our method not only reduces the number of queries, but also improves the attack success rate." "4 EXPERIMENTS: We evaluated the number of queries versus success rate of TREMBA on undefended network in two datasets: MNIST (LeCun et al., 1998) and ImageNet (Russakovsky et al., 2015). Moreover, we evaluated the efficiency of our method on adversarially defended networks in CIFAR10 (Krizhevsky & Hinton, 2009) and ImageNet."
Researcher Affiliation | Academia | "Zhichao Huang, Tong Zhang, The Hong Kong University of Science and Technology; zhuangbx@connect.ust.hk, tongzhang@tongzhang-ml.org"
Pseudocode | Yes | "Algorithm 1: Black-Box adversarial attack on the embedding space" (a hedged sketch of this search is given after the table).
Open Source Code | Yes | "Our code is available at https://github.com/TransEmbedBA/TREMBA"
Open Datasets | Yes | "We evaluated the number of queries versus success rate of TREMBA on undefended network in two datasets: MNIST (LeCun et al., 1998) and ImageNet (Russakovsky et al., 2015). Moreover, we evaluated the efficiency of our method on adversarially defended networks in CIFAR10 (Krizhevsky & Hinton, 2009) and ImageNet."
Dataset Splits | Yes | "We randomly divided the ImageNet validation set into two parts, containing 49000 and 1000 images respectively. The first part was used as the training data for the generator G, and the second part was used for evaluating the attacks. Each attack was tested on images from the MNIST test set." (A split sketch follows the table.)
Hardware Specification | Yes | "All the experiments were performed using pytorch on NVIDIA RTX 2080Ti."
Software Dependencies | No | The paper mentions "pytorch" as the framework used for experiments but does not provide specific version numbers for PyTorch or any other software dependencies.
Experiment Setup | Yes | "C HYPERPARAMETERS: Tables 14 to 19 list the hyperparameters for all the algorithms. The learning rate was fine-tuned for all the algorithms. We set sample size b = 20 for all the algorithms for fair comparisons."
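
The Pseudocode row cites Algorithm 1, the black-box attack carried out in the embedding space of a pre-trained perturbation generator. The sketch below illustrates the idea with an NES-style gradient estimate over antithetic Gaussian samples in that space; the names F (target classifier), decoder (the generator's decoder), the cross-entropy loss, the plain gradient step, and all default values are assumptions for illustration, not the paper's exact momentum update or loss.

```python
import torch
import torch.nn.functional as nnf

def nes_embedding_attack(F, decoder, x, y, z0, epsilon=0.03, sigma=0.1,
                         lr=0.5, b=20, max_queries=10_000):
    """Sketch of an untargeted L_inf attack searched in the embedding space.

    Assumptions: x has shape (1, C, H, W), y has shape (1,), z0 has shape
    (1, d) and warm-starts the search (in the paper the start comes from the
    generator trained on source models); decoder maps (1, d) -> (1, C, H, W).
    """
    with torch.no_grad():
        z = z0.clone()
        queries = 0
        while queries < max_queries:
            # Antithetic Gaussian directions in the low-dimensional embedding.
            u = torch.randn(b // 2, z.shape[1])
            u = torch.cat([u, -u], dim=0)                      # (b, d)
            losses = []
            for uk in u:
                # Decode the perturbed embedding into an L_inf-bounded perturbation.
                delta = epsilon * torch.tanh(decoder(z + sigma * uk))
                logits = F((x + delta).clamp(0, 1))            # one query
                losses.append(nnf.cross_entropy(logits, y))
                queries += 1
            losses = torch.stack(losses)                       # (b,)
            # NES estimate of the loss gradient w.r.t. the embedding, then ascend.
            grad = (losses.unsqueeze(1) * u).mean(dim=0) / sigma
            z = z + lr * grad
            # Stop as soon as the decoded perturbation fools the target network.
            delta = epsilon * torch.tanh(decoder(z))
            queries += 1
            if F((x + delta).clamp(0, 1)).argmax(dim=1).item() != y.item():
                break
        return (x + epsilon * torch.tanh(decoder(z))).clamp(0, 1), queries
```

Because the search runs over a low-dimensional embedding warm-started by a generator trained against source models, far fewer queries are needed than when the same NES estimator is run directly over pixels, which is the paper's central claim.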
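For the Dataset Splits row, the reported 49000/1000 partition of the ImageNet validation set could be reproduced with a fixed-seed random split along the following lines; the path, transform, and seed are placeholders rather than values taken from the paper.

```python
import torch
from torchvision import datasets, transforms

# Placeholder path and preprocessing; the paper does not specify them here.
val_set = datasets.ImageFolder("/path/to/imagenet/val",
                               transform=transforms.ToTensor())

# 49000 images to train the generator G, 1000 images to evaluate the attacks.
gen = torch.Generator().manual_seed(0)  # assumed seed, for repeatability only
train_part, eval_part = torch.utils.data.random_split(
    val_set, [49000, 1000], generator=gen)
```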