Diversified Adversarial Attacks based on Conjugate Gradient Method

Authors: Keiichiro Yamamura, Haruki Sato, Nariaki Tateiwa, Nozomi Hata, Toru Mitsutake, Issa Oe, Hiroki Ishikura, Katsuki Fujisawa

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | The results of large-scale evaluation experiments conducted on the latest robust models show that, for most models, ACG was able to find more adversarial examples with fewer iterations than the existing SOTA algorithm Auto-PGD (APGD). We compared our ACG with APGD, a SOTA white-box adversarial attack, on 64 robust models listed in RobustBench (Croce et al., 2021). (A hedged model-loading sketch follows the table.)
Researcher Affiliation | Collaboration | (1) Graduate School of Mathematics, Kyushu University, Fukuoka, Japan; (2) present affiliation: NTT Software Innovation Center, NTT Corporation; (3) Institute of Mathematics for Industry, Kyushu University, Fukuoka, Japan.
Pseudocode | Yes | Algorithm 1 ACG. (A hedged single-step sketch follows the table.)
Open Source Code | Yes | Our code is available at https://github.com/yamamura-k/ACG (footnote 2 in the paper).
Open Datasets | Yes | Models and Dataset: We used 64 models, i.e., 42, 17, and 5 models based on the CIFAR-10, CIFAR-100, and ImageNet datasets, respectively.
Dataset Splits | Yes | From a validation dataset, we used 10,000 test images for the evaluation when applying the CIFAR-10 and CIFAR-100 datasets, and 5,000 images when using the ImageNet dataset.
Hardware Specification | Yes | We compared the execution times of APGD and ACG with an Intel(R) Xeon(R) Gold 6240R CPU and an NVIDIA GeForce RTX 3090 GPU. Table 4 shows that APGD with five restarts (APGD(5)) took 22m 5.88s in real time to attack the model of Ding et al. (2020), and ACG with five restarts (ACG(5)) took 21m 15.67s. With one restart, APGD(1) took 6m 45.26s and ACG(1) took 6m 56.78s.
Software Dependencies | No | The paper discusses the use of deep learning models and attacks, but does not specify software dependencies with version numbers (e.g., PyTorch 1.x, TensorFlow 2.x, Python 3.x).
Experiment Setup | Yes | The initial step size η(0) is set to 2ε, and when the number of iterations reaches the precomputed checkpoint w_j, the step size η is halved... The parameters for the step-size selection ρ, the checkpoints w, the number of iterations N_iter, and the number of restarts were the same as in the study on APGD, i.e., ρ = 0.75, N_iter = 100, and 5 restarts. The initial point of the first restart was the center of the feasible region, whereas the others were determined by sampling from a uniform distribution. For reproducibility of the results, the random seed was fixed. (The checkpoint schedule is sketched after the table.)
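
The 64 evaluated models come from the RobustBench model zoo cited in the rows above. Below is a minimal sketch of loading one evaluated model and the CIFAR-10 test images with the robustbench package; the helper names load_model and load_cifar10 follow RobustBench's documented API, and the model key Ding2020MMA (the Ding et al., 2020 model timed in Table 4) is an assumption to be checked against the installed package version.

import torch
from robustbench.data import load_cifar10
from robustbench.utils import load_model

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Load one of the 64 evaluated robust models (CIFAR-10, L-inf threat model).
model = load_model(model_name="Ding2020MMA", dataset="cifar10",
                   threat_model="Linf").to(device).eval()

# The paper evaluates on the 10,000 CIFAR-10 test images.
x_test, y_test = load_cifar10(n_examples=10000)
x_test, y_test = x_test.to(device), y_test.to(device)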
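
The Pseudocode row refers to the paper's Algorithm 1 (ACG). As a reading aid, here is a hedged sketch of a single conjugate-gradient-style L∞ attack step, not a transcription of Algorithm 1: the Hestenes-Stiefel-type coefficient β and the sign-based projected update are assumptions in the spirit of the method, and Algorithm 1 additionally includes APGD-style step-size control omitted here.

import torch

def acg_like_step(x_adv, x_orig, grad, grad_prev, s_prev, eta, eps):
    # Change in the loss gradient between consecutive iterates.
    y = grad_prev - grad
    # Hestenes-Stiefel-type conjugate coefficient (assumed form).
    denom = (s_prev * y).sum()
    beta = -(grad * y).sum() / denom if denom.abs() > 1e-12 else grad.new_zeros(())
    # Conjugate direction: current gradient plus beta times the previous one.
    s = grad + beta * s_prev
    # Sign-based ascent step, then projection onto the eps-ball and [0, 1].
    x_new = x_adv + eta * s.sign()
    x_new = torch.min(torch.max(x_new, x_orig - eps), x_orig + eps)
    return x_new.clamp(0.0, 1.0), s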
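
The Experiment Setup row reuses APGD's step-size schedule: η starts at 2ε and is halved when the iteration count reaches a checkpoint w_j and the acceptance conditions governed by ρ = 0.75 fire. A small sketch of the checkpoint computation, assuming the recurrence published for APGD (Croce & Hein, 2020): p_0 = 0, p_1 = 0.22, p_{j+1} = p_j + max(p_j - p_{j-1} - 0.03, 0.06), with w_j = ceil(p_j * N_iter).

import math

def apgd_checkpoints(n_iter=100):
    # Work in hundredths so the recurrence stays exact (every increment is
    # a multiple of 0.01): p_0 = 0, p_1 = 0.22,
    # p_{j+1} = p_j + max(p_j - p_{j-1} - 0.03, 0.06).
    p = [0, 22]
    while p[-1] < 100:
        p.append(p[-1] + max(p[-1] - p[-2] - 3, 6))
    # w_j = ceil(p_j * N_iter), dropping fractions beyond the budget.
    return [math.ceil(pj * n_iter / 100) for pj in p if pj <= 100]

print(apgd_checkpoints(100))  # [0, 22, 41, 57, 70, 80, 87, 93, 99]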