Preprocessors Matter! Realistic Decision-Based Attacks on Machine Learning Systems
Authors: Chawin Sitawarin, Florian Tramèr, Nicholas Carlini
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We evaluate our attacks on a ResNet-18 (He et al., 2016) trained on the ImageNet dataset (Deng et al., 2009). ... We report the average perturbation size (ℓ2-norm) of adversarial examples found by each attack, referred to as the adversarial distance in short. Smaller adversarial distance means a stronger attack. |
| Researcher Affiliation | Collaboration | Chawin Sitawarin 1, Florian Tramèr 2, Nicholas Carlini 3. 1 Department of Computer Science, University of California, Berkeley, USA (work partially done while the author was at Google); 2 ETH Zürich, Zürich, Switzerland; 3 Google DeepMind, Mountain View, USA. |
| Pseudocode | Yes | Algorithm 1 Outline of Bypassing Attack. This example is built on top of a gradient-approximation-based attack algorithm (e.g., HSJA, QEBA), but it is compatible with any black-box attack. ... Algorithm 2 Outline of Biased-Gradient Attack built on top of gradient-approximation-based attack algorithm. (A hedged sketch of the bypassing idea appears after this table.) |
| Open Source Code | Yes | The code can be found at https://github.com/google-research/preprocessor-aware-black-box-attack. |
| Open Datasets | Yes | We evaluate our attacks on a ResNet-18 (He et al., 2016) trained on the ImageNet dataset (Deng et al., 2009). |
| Dataset Splits | No | The paper mentions using a pre-trained ResNet-18 model trained on the ImageNet dataset and evaluating on 1,000 random test samples, but it does not explicitly specify the training, validation, or test dataset splits used for the model's original training or for their own experimental setup beyond the test set. |
| Hardware Specification | Yes | The experiments are run on multiple remote servers with either Nvidia Tesla A100 40GB or Nvidia V100 GPUs. |
| Software Dependencies | No | Implementations of Boundary Attack and HSJA are taken from the Foolbox package (Rauber et al., 2017). ... For Sign-OPT Attack and QEBA, we use the official, publicly available implementation. ... This model is publicly available in the popular timm package (Wightman, 2019). ... which is implemented in PyTorch. (Specific version numbers for key software like PyTorch are not provided, only package names and references or a commit hash for Foolbox.) |
| Experiment Setup | Yes | Unless stated otherwise, all the attacks use 5,000 queries per one test sample. ... We thus sweep hyperparameters for all attacks and report results for the best choice. ... Appendix A contains full detail of all our experiments. ... For Boundary attack, we sweep the two choices of step size... For Sign-OPT attack, we consider the update step size α and the gradient estimate step size β. ... For HSJA, we tune the update step size γ... Lastly, we search the ratio r that controls the latent dimension that QEBA samples its random noise from for gradient approximation. (A hypothetical sweep sketch appears after this table.) |
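The pseudocode row above only names the Bypassing Attack at a high level. Below is a minimal, hypothetical sketch (not the authors' implementation) of the underlying idea: when the preprocessor is known and approximately invertible, an off-the-shelf decision-based attack can be run directly in the preprocessed space and the result mapped back to the original input space. The names `preprocess`, `invert_preprocess`, `make_decision_oracle`, and `black_box_attack` are placeholders, and a toy nearest-neighbour resize stands in for the real preprocessor.

```python
import numpy as np

def preprocess(x, out_size=224):
    """Toy stand-in preprocessor: nearest-neighbour downscale to out_size x out_size."""
    idx = np.arange(out_size) * x.shape[0] // out_size
    return x[np.ix_(idx, idx)]

def invert_preprocess(z, orig_size=256):
    """Approximate inverse: nearest-neighbour upscale to the original resolution.
    The bypassing idea relies on preprocess(invert_preprocess(z)) being close to z."""
    idx = np.arange(orig_size) * z.shape[0] // orig_size
    return z[np.ix_(idx, idx)]

def make_decision_oracle(model_decision):
    """The deployed system preprocesses every query before the hard-label model sees it."""
    return lambda x: model_decision(preprocess(x))

def bypassing_attack(x_orig, model_decision, black_box_attack):
    """black_box_attack(z0, query_fn) can be any decision-based attack (e.g., HSJA-style)."""
    oracle = make_decision_oracle(model_decision)
    # 1. Move the starting point into the preprocessed space, where the
    #    preprocessor no longer distorts the attack's perturbations.
    z0 = preprocess(x_orig)
    # 2. Run the off-the-shelf attack in that space; each query is lifted back to
    #    the original space so the full pipeline (preprocessor + model) is queried.
    z_adv = black_box_attack(z0, lambda z: oracle(invert_preprocess(z, x_orig.shape[0])))
    # 3. Recover an adversarial example in the original input space.
    return invert_preprocess(z_adv, x_orig.shape[0])
```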
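The experiment-setup row reports a fixed budget of 5,000 queries per test sample and a per-attack hyperparameter sweep scored by the average ℓ2 adversarial distance. The following is a minimal, hypothetical grid-search sketch of that protocol; `run_attack` and the sample/label arrays are placeholders, not the authors' code.

```python
import itertools
import numpy as np

QUERY_BUDGET = 5_000  # queries allowed per test sample, as stated above

def mean_adv_distance(run_attack, samples, labels, hparams):
    """Average L2 perturbation norm ('adversarial distance'); smaller means a stronger attack."""
    dists = []
    for x, y in zip(samples, labels):
        x_adv = run_attack(x, y, max_queries=QUERY_BUDGET, **hparams)
        dists.append(np.linalg.norm((x_adv - x).ravel()))
    return float(np.mean(dists))

def sweep_hyperparameters(run_attack, samples, labels, grid):
    """Grid-search attack hyperparameters (e.g., step sizes) and keep the best setting."""
    best_hparams, best_dist = None, np.inf
    for values in itertools.product(*grid.values()):
        hparams = dict(zip(grid.keys(), values))
        dist = mean_adv_distance(run_attack, samples, labels, hparams)
        if dist < best_dist:
            best_hparams, best_dist = hparams, dist
    return best_hparams, best_dist

# Illustrative usage: sweep an HSJA-style update step size gamma (values are made up).
# best, dist = sweep_hyperparameters(hsja_attack, x_test, y_test, {"gamma": [1.0, 10.0, 100.0]})
```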