LRS: Enhancing Adversarial Transferability through Lipschitz Regularized Surrogate

Authors: Tao Wu, Tie Luo, Donald C. Wunsch II

AAAI 2024

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "We evaluate our proposed LRS approach by attacking state-of-the-art standard deep neural networks and defense models. The results demonstrate significant improvement on the attack success rates and transferability."
Researcher Affiliation | Academia | Tao Wu¹, Tie Luo¹*, Donald C. Wunsch II² — ¹Department of Computer Science, Missouri University of Science and Technology; ²Department of Electrical and Computer Engineering, Missouri University of Science and Technology; {wuta, tluo, dwunsch}@mst.edu
Pseudocode | Yes | Algorithm 1: LRS-1 (using PGD as an example base)
Open Source Code | Yes | "Our code is available at https://github.com/TrustAIoT/LRS."
Open Datasets | Yes | "Dataset. We test untargeted ℓ∞ black-box attacks on CIFAR-10 (Krizhevsky, Hinton et al. 2009) and ImageNet (Russakovsky et al. 2015) datasets as the common benchmark (Dong et al. 2018, 2019; Guo, Li, and Chen 2020; Li et al. 2023)."
Dataset Splits | Yes | "For ImageNet, we randomly sample 5,000 test images that are correctly classified by all the target models from the ImageNet validation set."
Hardware Specification | Yes | "All experiments are performed on an NVIDIA V100 GPU."
Software Dependencies | No | The paper mentions specific methods (e.g., PGD) and optimizers (SGD) but does not provide version numbers for any software libraries, programming languages, or environments (e.g., Python version, PyTorch version, CUDA version).
Experiment Setup | Yes | "Implementation details on ImageNet. For LRS-1 regularization, we set λ1 = 5.0, h1 = 0.01. For LRS-2 regularization, we set λ2 = 5.0, h2 = 1.5. When using LRS-F as regularization, we keep the same λ and h values. We use an SGD optimizer with momentum 0.9 and weight decay 0.0005; the learning rate is fixed at 0.001, and the surrogate model is trained for 10 epochs as a tradeoff between efficiency and efficacy. With PGD as the back-end method, we run it for 50 iterations on ImageNet with perturbation range 8/255 and step size 2/255."
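As a rough illustration of the attack configuration quoted above, here is a minimal sketch of untargeted ℓ∞ PGD with the stated hyperparameters (ε = 8/255, step size 2/255, 50 iterations). The "model" is a toy logistic regressor in NumPy so the example stays self-contained; the model, data, and function names are hypothetical stand-ins, not the paper's surrogate networks or released code.

```python
# Sketch of untargeted l-infinity PGD with the paper's quoted settings
# (eps = 8/255, step = 2/255, 50 iterations), applied to a toy logistic
# model. This is an illustrative stand-in, not the authors' implementation.
import numpy as np

EPS, STEP, ITERS = 8 / 255, 2 / 255, 50

def loss_and_grad(x, w, y):
    """Binary cross-entropy of a logistic model and its gradient w.r.t. the input x."""
    p = 1.0 / (1.0 + np.exp(-x @ w))                 # sigmoid(w . x)
    p = np.clip(p, 1e-12, 1 - 1e-12)                 # numerical safety
    loss = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    grad = (p - y) * w                               # d(loss)/dx
    return loss, grad

def pgd_attack(x, w, y, eps=EPS, step=STEP, iters=ITERS):
    """Untargeted PGD: ascend the loss with signed gradient steps, then
    project back into the l-inf eps-ball and the valid pixel range."""
    x_adv = x.copy()
    for _ in range(iters):
        _, grad = loss_and_grad(x_adv, w, y)
        x_adv = x_adv + step * np.sign(grad)         # gradient-sign ascent
        x_adv = np.clip(x_adv, x - eps, x + eps)     # project into eps-ball
        x_adv = np.clip(x_adv, 0.0, 1.0)             # stay in [0, 1]
    return x_adv

rng = np.random.default_rng(0)
x = rng.uniform(0.2, 0.8, size=16)                   # toy "image" in [0, 1]
w = rng.normal(size=16)                              # toy model weights
y = 1.0                                              # true label

x_adv = pgd_attack(x, w, y)
clean_loss, _ = loss_and_grad(x, w, y)
adv_loss, _ = loss_and_grad(x_adv, w, y)
```

Note that the LRS regularizers themselves (λ1, h1, etc.) act during surrogate fine-tuning, before the attack runs; they are not shown here — see the paper's Algorithm 1 for that step.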