Blurred-Dilated Method for Adversarial Attacks
Authors: Yang Deng, Weibin Wu, Jianping Zhang, Zibin Zheng
NeurIPS 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments on the ImageNet dataset show that adversarial examples generated by BD achieve significantly higher transferability than the state-of-the-art baselines. |
| Researcher Affiliation | Academia | Yang Deng, School of Software Engineering, Sun Yat-sen University, dengy73@mail2.sysu.edu.cn; Weibin Wu, School of Software Engineering, Sun Yat-sen University, wuwb36@mail.sysu.edu.cn; Jianping Zhang, Department of Computer Science and Engineering, The Chinese University of Hong Kong, jpzhang@cse.cuhk.edu.hk; Zibin Zheng, School of Software Engineering, Sun Yat-sen University, zhzibin@mail.sysu.edu.cn |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | No | The paper provides links to 'open-source pretrained models' from torchvision and GitHub repositories (footnotes 1, 2, 3), which are third-party resources used by the authors, but it does not include an explicit statement or link for the source code of the Blurred-Dilated (BD) method itself. |
| Open Datasets | Yes | Consistent with the previous works [18, 21], we use the ImageNet-compatible dataset in the NIPS 2017 adversarial competition [32] as the test set to generate adversarial samples. After modifying source models with our BD, we fine-tune the modified source models with the ImageNet training set [25] to recover their classification accuracy. ... Following previous efforts [15, 13], we choose the CIFAR-10 and CIFAR-100 datasets [24], which consist of 60000 images from 10 and 100 classes, respectively. |
| Dataset Splits | No | The paper mentions a training set and a test set for the CIFAR-10/100 datasets ('officially divided into a training set of 50000 images and a test set of 10000 images') and uses the NIPS 2017 adversarial competition dataset as the test set for ImageNet, but it does not explicitly specify a validation split or its size. |
| Hardware Specification | Yes | All experiments were performed with an NVIDIA V100 GPU. |
| Software Dependencies | No | The paper mentions using models collected from 'torchvision' (footnote 1) but does not provide specific version numbers for any software, libraries, or frameworks used in the experiments. |
| Experiment Setup | Yes | The step size α is set to 2/255 when ϵ = 16/255, and 1/255 when ϵ = 8/255 or 4/255. For MI-FGSM, we set the decay factor µ = 1.0. The iteration number T = 10. |
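
The experiment-setup row above gives the attack hyperparameters (ε, step size α, decay factor µ, and iteration count T) but no code. As a rough illustration of those settings only, the sketch below implements a generic MI-FGSM loop with ε = 16/255, α = 2/255, µ = 1.0, and T = 10. It is not the authors' released code: the BD-modified source model is not reproduced here, and `model` is simply assumed to be any PyTorch classifier (e.g., a torchvision pretrained network) taking inputs in [0, 1].

```python
# Minimal MI-FGSM sketch using the hyperparameters reported in the paper.
# Assumes `model` is a PyTorch classifier over images scaled to [0, 1];
# the Blurred-Dilated (BD) source-model modification is NOT implemented here.
import torch
import torch.nn.functional as F

def mi_fgsm(model, x, y, eps=16/255, alpha=2/255, mu=1.0, T=10):
    """Momentum Iterative FGSM; returns an adversarial example for batch x."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # accumulated gradient momentum
    for _ in range(T):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Normalize the gradient by its L1 norm, then accumulate with decay mu
        g = mu * g + grad / grad.abs().sum(dim=(1, 2, 3), keepdim=True)
        # Signed step of size alpha, projected back into the eps-ball and [0, 1]
        x_adv = x_adv.detach() + alpha * g.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)
    return x_adv
```

For the smaller budgets reported in the paper (ε = 8/255 or 4/255), the same loop would be called with α = 1/255, matching the step sizes quoted in the table.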