Random Noise Defense Against Query-Based Black-Box Attacks
Authors: Zeyu Qin, Yanbo Fan, Hongyuan Zha, Baoyuan Wu
NeurIPS 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on CIFAR-10 and ImageNet verify our theoretical findings and the effectiveness of RND and RND-GF. |
| Researcher Affiliation | Collaboration | 1School of Data Science, Shenzhen Research Institute of Big Data, The Chinese University of Hong Kong, Shenzhen 2Tencent AI Lab |
| Pseudocode | No | The paper does not contain any structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Yes, we provide our code in the Supplemental Materials. |
| Open Datasets | Yes | We conduct experiments on two widely used benchmark datasets in adversarial machine learning: CIFAR-10 [28] and ImageNet [14]. |
| Dataset Splits | Yes | Following [26], we evaluate all the attack methods on the whole test set of CIFAR-10 and 1,000 randomly sampled images from the validation set of ImageNet. |
| Hardware Specification | No | The paper states 'See Supplementary Materials' for hardware details, but these materials are not provided in the main paper. No specific hardware details are present in the main text. |
| Software Dependencies | No | The paper states 'See Section 5.1 and Supplementary Materials' for training details, but no specific software dependencies with version numbers are listed in the main text. Supplementary materials are not provided. |
| Experiment Setup | Yes | The perturbation budget of ℓ∞ is set to 0.05 for both datasets. For the ℓ2 attack, the perturbation budget is set to 1 and 5 on CIFAR-10 and ImageNet, respectively. The number of maximal queries is set to 10,000. We adopt the cyclic learning rate [44] to achieve superconvergence in 50 epochs. |
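The defense evaluated above, RND, adds lightweight random noise to each incoming query before inference. A minimal sketch of that idea is below; the `sigma` value, the `rnd_predict` wrapper name, and the toy model are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def rnd_predict(model_fn, x, sigma=0.02, rng=None):
    """Sketch of a random-noise defense: perturb each query with
    small Gaussian noise before the model sees it, so repeated
    queries by a black-box attacker return noisy, inconsistent
    feedback. `sigma` is an illustrative value, not from the paper."""
    rng = np.random.default_rng() if rng is None else rng
    noise = rng.normal(0.0, sigma, size=x.shape).astype(x.dtype)
    # Keep the perturbed query in the valid image range [0, 1].
    return model_fn(np.clip(x + noise, 0.0, 1.0))

# Hypothetical stand-in model: a linear score over the flattened input.
def toy_model(x):
    w = np.ones(x.size, dtype=x.dtype)
    return float(x.reshape(-1) @ w)

# The same query, submitted five times, yields slightly different scores.
x = np.full((3, 4, 4), 0.5, dtype=np.float32)
scores = [rnd_predict(toy_model, x, sigma=0.02) for _ in range(5)]
```

Because the noise is drawn fresh per query, an attacker estimating gradients from score differences receives corrupted estimates, which is the mechanism the paper analyzes.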