PID-Based Approach to Adversarial Attacks

Authors: Chen Wan, Biaohua Ye, Fangjun Huang (pp. 10033-10040)

AAAI 2021 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable Result LLM Response
Research Type Experimental Extensive experiments consistently demonstrate that our method can achieve higher attack success rates and exhibit better transferability compared with the state-of-the-art gradient-based adversarial attacks.
Researcher Affiliation Academia Chen Wan (1,2), Biaohua Ye (1,2), Fangjun Huang (1,2). (1) School of Computer Science and Engineering, Sun Yat-Sen University, Guangzhou 510006, China; (2) Guangdong Provincial Key Laboratory of Information Security Technology, Guangzhou 510006, China. wanchen18@outlook.com, yebh3@mail2.sysu.edu.cn, huangfj@mail.sysu.edu.cn
Pseudocode Yes Algorithm 1 MID-FGSM. Input: a clean example x with ground-truth label y; a classifier f with loss function J. Parameters: perturbation size ε; number of iterations T; decay factor μ; control parameter k_d. Output: adversarial example x^adv. 1: Let α = ε/T, x_0^adv = x, g_0 = 0, D_0 = 0; 2: for t = 0 to T − 1 do; 3: Get x̂_t by x̂_t = x_t^adv − α·D_t; 4: Input x̂_t to f and obtain gradient ∇_x̂ J(x̂_t, y); 5: Input x_t^adv to f and obtain gradient ∇_x J(x_t^adv, y); 6: Update D_{t+1} by Eq. (14); 7: Update g_{t+1} by Eq. (12); 8: Update x_{t+1}^adv by Eq. (13); 9: end for; 10: return x^adv = x_T^adv.
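The control flow of Algorithm 1 can be sketched in NumPy as below. The report does not reproduce Eqs. (12)-(14), so the momentum update follows the standard MI-FGSM rule and the derivative term D is an assumed gradient-difference form; both are illustrative stand-ins, not the paper's exact equations.

```python
import numpy as np

def mid_fgsm(x, y, grad_fn, eps=16.0, T=10, mu=1.0, kd=1.0):
    """Sketch of Algorithm 1 (MID-FGSM).

    grad_fn(x, y) returns the gradient of the loss J w.r.t. x.
    The D update and the ascent step are assumptions standing in
    for Eqs. (12)-(14), which are not quoted in this report.
    """
    alpha = eps / T                      # step size alpha = eps / T
    x_adv = x.copy()                     # x_0^adv = x
    g = np.zeros_like(x)                 # g_0 = 0
    D = np.zeros_like(x)                 # D_0 = 0
    for t in range(T):
        x_hat = x_adv - alpha * D        # look-ahead point x̂_t (line 3)
        grad_hat = grad_fn(x_hat, y)     # gradient at x̂_t (line 4)
        grad = grad_fn(x_adv, y)         # gradient at x_t^adv (line 5)
        # Assumed derivative term: scaled difference of the two gradients.
        D = kd * (grad - grad_hat)
        # MI-FGSM-style momentum accumulation (analogue of Eq. 12).
        g = mu * g + grad / (np.sum(np.abs(grad)) + 1e-12)
        # Signed ascent step, clipped to the eps-ball (analogue of Eq. 13).
        x_adv = np.clip(x_adv + alpha * np.sign(g), x - eps, x + eps)
    return x_adv
```

Because each edit stays within the clipped ε-ball, the returned example never deviates from the clean input by more than ε per pixel.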
Open Source Code No The paper does not provide an explicit statement or link for the open-source code of the described methodology.
Open Datasets Yes In this section, we conduct extensive experiments on the ImageNet validation set (Deng et al. 2009)
Dataset Splits No The test dataset consists of 10,000 images (resized to 299 × 299 × 3) chosen randomly from the ImageNet validation set, which are almost correctly classified by all the testing models described below.
Hardware Specification Yes All of our experiments are conducted on the TensorFlow DNN computing framework (Abadi et al. 2016) and run with four parallel NVIDIA GeForce GTX 1080Ti GPUs.
Software Dependencies No All of our experiments are conducted on the TensorFlow DNN computing framework (Abadi et al. 2016)
Experiment Setup Yes In our experiments, the hyper-parameters, i.e., the maximum perturbation of each pixel, the number of iterations, the step size, and the default decay factor, are set as ϵ = 16, T = 10, α = ϵ/T = 1.6, and µ = 1.0, respectively. The transformation probability of the DI augmentation strategy is set as 0.5, the size of the Gaussian kernels in the TI augmentation strategy is set as 15 × 15, and the number of scale copies in the SI augmentation strategy is set as 5.
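The reported hyper-parameters can be collected in a single configuration, e.g. as a hypothetical Python dict (the key names are illustrative, not from the paper):

```python
# Hypothetical configuration collecting the reported attack settings.
attack_cfg = {
    "epsilon": 16,               # maximum per-pixel perturbation
    "iterations": 10,            # T
    "step_size": 16 / 10,        # alpha = epsilon / T = 1.6
    "decay_factor": 1.0,         # mu
    "di_transform_prob": 0.5,    # DI augmentation probability
    "ti_kernel_size": (15, 15),  # Gaussian kernel in TI augmentation
    "si_scale_copies": 5,        # number of scale copies in SI augmentation
}

# The step size is derived from epsilon and T rather than set independently.
assert attack_cfg["step_size"] == attack_cfg["epsilon"] / attack_cfg["iterations"]
```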