A Few Seconds Can Change Everything: Fast Decision-based Attacks against DNNs
Authors: Ningping Mou, Baolin Zheng, Qian Wang, Yunjie Ge, Binqing Guo
IJCAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on three datasets demonstrate that FastDrop can escape the detection of the state-of-the-art (SOTA) black-box defenses and reduce the number of queries by 13–133× under the same level of perturbations compared with the SOTA attacks. |
| Researcher Affiliation | Academia | Wuhan University, Wuhan, China {ningpingmou, baolinzheng, qianwang, yunjiege, binqingguo}@whu.edu.cn |
| Pseudocode | Yes | Algorithm 1 Orderly Frequency Dropping. Input: original input (x, y), target model f. Output: adversarial input (x′, y′). 1: F_amplitude, F_phase ← FFT(x). 2: {b1, b2, ..., bn} ← split F_amplitude into blocks. 3: {s1, s2, ..., sn} ← Sort({b1, b2, ..., bn}). 4: for i = 1 : n do 5: s_i ← 0. 6: x′ ← IFFT(F_amplitude, F_phase). 7: if f(x′) ≠ y then 8: x ← x′. 9: break. 10: end if 11: end for 12: return x. (A runnable sketch of this procedure appears after the table.) |
| Open Source Code | No | The paper does not provide an explicit statement about the release of its own source code, nor does it include a link to a code repository for the methodology described. |
| Open Datasets | Yes | We use commonly-used ImageNet [Deng et al., 2009], Flowers-102 [Nilsback and Zisserman, 2008], and STL-10 [Coates et al., 2011] as our datasets. |
| Dataset Splits | No | The paper mentions ImageNet, Flowers-102, and STL-10 as datasets and states that 1000 images are randomly selected from each class. However, it does not specify explicit train/validation/test splits for their experiments. |
| Hardware Specification | Yes | Moreover, when conducting experiments of ResNet50 on a GeForce RTX 3080, BOA-1000 needs 207.15s to finish an attack of an image, while FastDrop only needs 0.90s. |
| Software Dependencies | No | The paper mentions general tools and models like ResNet50 and MobileNetV3 (implying deep learning frameworks like PyTorch or TensorFlow), but does not provide specific version numbers for any software dependencies. |
| Experiment Setup | Yes | To make fair comparisons, we carefully tune the hyper-parameters of these methods to achieve better results, and show the least queries under the l2 constraint. The paper also includes Appendix A titled "Hyperparameter Analysis" which details analysis of the "order of the sorted blocks", "non-zero modification", and "threshold of OFD" for Fast Drop. |
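
The quoted Algorithm 1 (Orderly Frequency Dropping) translates directly into code. Below is a minimal sketch of that procedure, not the authors' implementation: it assumes NumPy's 2-D FFT, a single-channel image as a 2-D float array, square blocks of a hypothetical `block_size`, and a `model` callable that returns a predicted label. The block-sorting order and the "non-zero modification" and "threshold of OFD" refinements analyzed in the paper's Appendix A are omitted here.

```python
import numpy as np

def orderly_frequency_dropping(x, y, model, block_size=8):
    """Zero out amplitude-spectrum blocks one by one until the model's
    prediction flips, then return the adversarial example (or None)."""
    spectrum = np.fft.fft2(x)
    amplitude, phase = np.abs(spectrum), np.angle(spectrum)

    # Split the amplitude spectrum into non-overlapping blocks and sort them
    # by total energy (an assumed ordering; the paper studies this choice).
    h, w = amplitude.shape
    blocks = [(i, j)
              for i in range(0, h, block_size)
              for j in range(0, w, block_size)]
    blocks.sort(key=lambda b: amplitude[b[0]:b[0] + block_size,
                                        b[1]:b[1] + block_size].sum())

    for i, j in blocks:
        amplitude[i:i + block_size, j:j + block_size] = 0.0
        # Rebuild the image from the modified amplitude and original phase.
        x_adv = np.real(np.fft.ifft2(amplitude * np.exp(1j * phase)))
        if model(x_adv) != y:  # one model query per dropped block
            return x_adv
    return None  # label never flipped; the attack failed on this input
```

Each loop iteration costs exactly one query to the target model, which is consistent with the low query counts the paper reports relative to the SOTA decision-based attacks.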