Learning to Learn Transferable Attack
Authors: Shuman Fang, Jie Li, Xianming Lin, Rongrong Ji
AAAI 2022
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Empirical results on the widely-used dataset demonstrate the effectiveness of our attack method with a 12.85% higher success rate of transfer attack compared with the state-of-the-art methods. |
| Researcher Affiliation | Academia | ¹MAC Lab, Department of Artificial Intelligence, School of Informatics, Xiamen University; ²Peng Cheng Laboratory, Shenzhen, China |
| Pseudocode | Yes | Algorithm 1: Learning to Learn Transferable Attack |
| Open Source Code | No | The paper provides a link (https://github.com/JHL-HUST/SI-NI-FGSM) to a baseline method's code, but no explicit statement or link for the source code of the proposed LLTA method. |
| Open Datasets | Yes | Following most previous works, we report the results on the ImageNet-compatible dataset in the NIPS 2017 adversarial competition (Kurakin et al. 2018), which contains 1,000 categories and one image per category. We tune hyper-parameters on another 1,000 images randomly chosen from the ImageNet validation set (Deng et al. 2009). |
| Dataset Splits | Yes | We tune hyper-parameters on another 1,000 images randomly chosen from the ImageNet validation set (Deng et al. 2009). For our LLTA, we set the size of the support set to 20, the size of the query set to 10, and the number of meta iterations to 5. |
| Hardware Specification | No | No specific hardware details (e.g., GPU models, CPU types, or cloud instance types) used for running the experiments are mentioned in the paper. |
| Software Dependencies | No | The paper does not provide specific version numbers for any software dependencies or libraries used in the experiments. |
| Experiment Setup | Yes | We follow the attack setting of most previous works (Kurakin et al. 2016; Dong et al. 2018, 2019), with maximum ℓp-norm perturbation ϵ = 16, number of iterations T = 10, and step size α = ϵ/T = 1.6. We set other parameters following the original settings of the baselines. For our LLTA, we set the size of the support set to 20, the size of the query set to 10, and the number of meta iterations to 5. For the MGS used in LLTA, we set both the number of iterations and the number of sampled update directions to 5. |
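
The quoted setup follows the standard iterative FGSM attack protocol (Kurakin et al. 2016). Below is a minimal PyTorch sketch of that baseline setting only, not of the LLTA meta-learning procedure itself; it assumes a white-box `model` and `images` scaled to [0, 1], so ϵ = 16 on the 0-255 pixel scale becomes 16/255, and it reads the perturbation bound as the usual ℓ∞ ball.

```python
import torch
import torch.nn.functional as F

def ifgsm_attack(model, images, labels, eps=16 / 255, T=10):
    """Iterative FGSM with the quoted hyper-parameters:
    eps = 16 (0-255 scale), T = 10 iterations, alpha = eps / T."""
    alpha = eps / T  # 1.6 on the 0-255 scale, matching alpha = eps/T = 1.6
    adv = images.clone().detach()
    for _ in range(T):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), labels)
        (grad,) = torch.autograd.grad(loss, adv)
        with torch.no_grad():
            adv = adv + alpha * grad.sign()                 # ascend the loss
            adv = images + (adv - images).clamp(-eps, eps)  # project into the eps-ball
            adv = adv.clamp(0.0, 1.0)                       # stay a valid image
    return adv.detach()
```

In a transfer-attack evaluation like the paper's, the returned `adv` images would then be fed to held-out black-box models to measure the attack success rate.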