Progressive-Scale Boundary Blackbox Attack via Projective Gradient Estimation
Authors: Jiawei Zhang, Linyi Li, Huichen Li, Xiaolu Zhang, Shuang Yang, Bo Li
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on MNIST, CIFAR-10, CelebA, and ImageNet against different models including a real-world face recognition API show that PSBA-PGAN significantly outperforms existing baseline attacks in terms of query efficiency and attack success rate. |
| Researcher Affiliation | Collaboration | ¹Zhejiang University, China (work done during remote internship at UIUC); ²UIUC, USA; ³Ant Financial, China; ⁴Alibaba Group US, USA. |
| Pseudocode | Yes | Detailed pseudocode can be found in Appendix D.4. |
| Open Source Code | Yes | The code is publicly available at https://github.com/AI-secure/PSBA. |
| Open Datasets | Yes | Extensive experiments on MNIST (LeCun et al., 1998), CIFAR-10 (Krizhevsky et al., 2009), CelebA (Liu et al., 2015), and ImageNet (Deng et al., 2009) against different models including a real-world face recognition API show that PSBA-PGAN significantly outperforms existing baseline attacks in terms of query efficiency and attack success rate. |
| Dataset Splits | Yes | We select the optimal scale for projection subspace based on a validation set. ... use an additional validation set of ten images to search for the optimal scale. |
| Hardware Specification | No | The paper does not provide specific hardware details (e.g., GPU/CPU models, memory amounts) used for running experiments. |
| Software Dependencies | No | The paper mentions PyTorch (torchvision.models, https://pytorch.org/docs/stable/torchvision/models.html, 2020) but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | The PGAN training details can be found in Appendix D.2, and reference model performance is shown in Appendix D.3. For simplicity, we will denote PGAN28 as the attack using the output of PGAN with scale 28 × 28. |