On the Convergence of an Adaptive Momentum Method for Adversarial Attacks
Authors: Sheng Long, Wei Tao, Shuohao Li, Jun Lei, Jun Zhang
AAAI 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experiments on multiple models demonstrate the efficacy of our method in generating adversarial examples with human-imperceptible noise while achieving high attack success rates, indicating its superiority over previous adversarial example generation methods. |
| Researcher Affiliation | Academia | (1) Laboratory for Big Data and Decision, National University of Defense Technology, Changsha 410073, China; (2) Strategic Assessments and Consultation Institute, Academy of Military Science, Beijing 100091, China |
| Pseudocode | Yes | Algorithm 1: Adaptive Momentum and Step-size Iterate Fast Gradient Method (AdaMSI-FGM) |
| Open Source Code | No | The paper does not provide an explicit statement or link to the source code for the methodology described in this paper. |
| Open Datasets | Yes | Dataset. We randomly select 500 images from the ILSVRC 2012 validation set. Models. We consider eight pre-trained models from the torchvision library (Paszke et al. 2019) on the ImageNet dataset. (A hedged data-loading sketch follows this table.) |
| Dataset Splits | No | The paper mentions using "ILSVRC 2012 validation set" as its dataset but does not provide specific details on how this dataset was split into training, validation, and test sets for their experiments, nor does it refer to standard splits for reproduction. |
| Hardware Specification | Yes | The experiments are conducted on a single NVIDIA GeForce RTX 3060 GPU. |
| Software Dependencies | Yes | The software versions used are Ubuntu 18.04.1, Python 3.7.12, PyTorch 1.11.0, and Torchvision 0.12.0. |
| Experiment Setup | Yes | Hyper-Parameters. Although more iterations are favorable to convergence (Pintor et al. 2022) and targeted attacks (Zhao, Liu, and Larson 2021), the traditional iteration setting T = 10 is adopted since we focus on non-targeted transferable attacks. The maximum L∞-norm perturbation is ϵ = 4/255, and the batch size is set to 64 for all algorithms. For MI-FGSM and NI-FGSM, we adopt the default momentum parameter µ = 1 and step size α_T = 4/255/10. For PGD, the step size is α_T = 4/255/10. For AdaMSI-FGM, we set α_t = 1/255/10, λ = 0.6, β_{2,t} = 1 − γ/t where γ = 1, and ξ_t = δ/t where δ = 1e−16. (A hedged attack-loop sketch using these values follows the table.) |
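As a companion to the Open Datasets and Software Dependencies rows, the snippet below is a minimal sketch of how the reported evaluation data could be assembled with the stated PyTorch 1.11 / torchvision 0.12 stack. The directory path, the choice of ResNet-50 as the example model, and the preprocessing pipeline are illustrative assumptions; the paper does not specify them.

```python
import torch
from torchvision import datasets, models, transforms

# One of the torchvision pre-trained ImageNet classifiers
# (`pretrained=True` is the flag used by the 0.12 release).
model = models.resnet50(pretrained=True).eval()

# Standard ImageNet preprocessing, kept in [0, 1] pixel space so that
# the eps = 4/255 L_inf budget is measured directly on pixels.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
])

# 500 randomly selected ILSVRC 2012 validation images; the
# class-per-folder layout under "ILSVRC2012/val" is an assumption.
val_set = datasets.ImageFolder("ILSVRC2012/val", transform=preprocess)
indices = torch.randperm(len(val_set))[:500].tolist()
loader = torch.utils.data.DataLoader(
    torch.utils.data.Subset(val_set, indices),
    batch_size=64, shuffle=False)
```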
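The Experiment Setup row lists the per-attack hyper-parameters. The sketch below wires them into a standard momentum-based iterative FGSM loop (MI-FGSM-style) so the budget and iteration count are concrete. It is not the paper's AdaMSI-FGM update: the adaptive step size and second-moment terms (λ, β_{2,t}, ξ_t) belong to the paper's Algorithm 1 and are not reproduced here.

```python
import torch

def momentum_iterative_attack(model, x, y, loss_fn,
                              eps=4/255, T=10, mu=1.0, alpha=4/255/10):
    """MI-FGSM-style baseline under the reported L_inf budget
    (eps = 4/255, T = 10, mu = 1, alpha = 4/255/10)."""
    x_adv = x.clone().detach()
    g = torch.zeros_like(x)  # accumulated momentum
    for _ in range(T):
        x_adv.requires_grad_(True)
        loss = loss_fn(model(x_adv), y)
        grad, = torch.autograd.grad(loss, x_adv)
        # Normalize the gradient by its L1 norm before accumulating
        # momentum, as in MI-FGSM.
        g = mu * g + grad / grad.abs().mean(dim=(1, 2, 3), keepdim=True)
        x_adv = x_adv.detach() + alpha * g.sign()
        # Project back into the eps-ball around x and the valid pixel range.
        x_adv = torch.clamp(torch.min(torch.max(x_adv, x - eps), x + eps), 0, 1)
    return x_adv
```

Replacing the fixed α with the adaptive schedule reported above (α_t, λ, β_{2,t}, ξ_t) is where the paper's Algorithm 1 departs from this baseline.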