Making Substitute Models More Bayesian Can Enhance Transferability of Adversarial Examples
Authors: Qizhang Li, Yiwen Guo, Wangmeng Zuo, Hao Chen
ICLR 2023
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Extensive experiments have been conducted to verify the effectiveness of our method, on common benchmark datasets, and the results demonstrate that our method outperforms recent state-of-the-arts by large margins (roughly 19% absolute increase in average attack success rate on ImageNet), and, by combining with these recent methods, further performance gain can be obtained. |
| Researcher Affiliation | Collaboration | Harbin Institute of Technology, Tencent Security Big Data Lab, Independent Researcher, UC Davis |
| Pseudocode | No | The paper describes its methods mathematically and textually but does not include any pseudocode or algorithm blocks. |
| Open Source Code | Yes | Our code: https://github.com/qizhangli/MoreBayesian-attack. |
| Open Datasets | Yes | Experiments on attacking a variety of CIFAR-10 (Krizhevsky & Hinton, 2009) and ImageNet (Russakovsky et al., 2015) victim models have been performed |
| Dataset Splits | Yes | For CIFAR-10 tests, we performed attacks on all test data. For ImageNet, we randomly sampled 5000 test images from a set of the validation data that could be classified correctly by these victim models, and we learned perturbations to these images, following prior work (Huang & Kong, 2022; Guo et al., 2020; 2022). |
| Hardware Specification | Yes | All experiments are performed on an NVIDIA V100 GPU. |
| Software Dependencies | No | The paper does not list specific version numbers for software dependencies such as Python, PyTorch, or CUDA. While an Appendix details settings for compared methods, it does not provide the paper's own software dependencies with versions. |
| Experiment Setup | Yes | In possible finetuning, we set γ = 0.1/∥w∥₂ and a finetuning learning rate of 0.05 if SWAG was incorporated. We set a smaller finetuning learning rate of 0.001 if it was not. We use an SGD optimizer with a momentum of 0.9 and a weight decay of 0.0005 and finetune models for 10 epochs on both CIFAR-10 and ImageNet. We set batch sizes of 128 and 1024 on CIFAR-10 and ImageNet, respectively. |
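
The "Experiment Setup" row reports the finetuning hyperparameters. Below is a minimal PyTorch sketch of that optimizer configuration, assuming a standard classification model; the function name and structure are hypothetical, and the authors' actual implementation is in the linked repository.

```python
import torch

def build_finetune_optimizer(model: torch.nn.Module, use_swag: bool) -> torch.optim.SGD:
    # Learning rate 0.05 when SWAG is incorporated, 0.001 otherwise,
    # as stated in the "Experiment Setup" row.
    lr = 0.05 if use_swag else 0.001
    return torch.optim.SGD(
        model.parameters(),
        lr=lr,
        momentum=0.9,       # reported momentum
        weight_decay=5e-4,  # reported weight decay (0.0005)
    )

# Finetuning reportedly runs for 10 epochs, with batch size 128 on CIFAR-10
# and 1024 on ImageNet.
```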
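
The "Dataset Splits" row describes how the ImageNet evaluation set is built: only validation images that every victim model classifies correctly are kept, and 5000 of them are sampled at random. A hedged sketch of that selection step, assuming a batch-size-1 loader and CUDA victim models (all names here are illustrative, not taken from the paper's code):

```python
import random
import torch

@torch.no_grad()
def sample_eval_indices(loader, victim_models, num_samples=5000, seed=0):
    # Collect indices of validation images that all victim models classify correctly.
    correct_indices = []
    for idx, (image, label) in enumerate(loader):  # assumes batch size 1
        image, label = image.cuda(), label.cuda()
        if all(m(image).argmax(dim=1).item() == label.item() for m in victim_models):
            correct_indices.append(idx)
    # Randomly sample the reported 5000 images from the correctly classified pool.
    random.seed(seed)
    return random.sample(correct_indices, num_samples)
```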