Phase-aware Adversarial Defense for Improving Adversarial Robustness
Authors: Dawei Zhou, Nannan Wang, Heng Yang, Xinbo Gao, Tongliang Liu
ICML 2023 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimental results show that the proposed method can significantly improve the robust accuracy against multiple attacks and even adaptive attacks. In addition, ablation studies demonstrate the effectiveness of our defense strategy. |
| Researcher Affiliation | Collaboration | (1) School of Telecommunications Engineering, State Key Laboratory of Integrated Services Networks, Xidian University, Xi'an, Shaanxi, China; (2) Shenzhen Ai Mall Tech, Shenzhen, China; (3) Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications, Chongqing, China; (4) Mohamed bin Zayed University of Artificial Intelligence, Masdar City, Abu Dhabi, United Arab Emirates; (5) University of Sydney, Darlington, NSW, Australia. |
| Pseudocode | Yes | Algorithm 1 Phase-aware Adversarial Defense (PAD). |
| Open Source Code | Yes | The code can be found in https://github.com/dwDavidxd/PAD. |
| Open Datasets | Yes | We use two classic datasets, CIFAR-10 (Krizhevsky et al., 2009) and Mini-ImageNet (Vinyals et al., 2016), to evaluate the effectiveness of our method. |
| Dataset Splits | No | The paper states that CIFAR-10 and Mini-ImageNet each have 50,000 training images and 10,000 test images, but does not explicitly mention the size or composition of a validation split. |
| Hardware Specification | No | The paper does not provide specific details about the hardware used for running the experiments, such as GPU models, CPU types, or cloud instance specifications. |
| Software Dependencies | No | The paper mentions using SGD and AdaIN, but does not provide specific version numbers for any software dependencies or libraries used in the implementation. |
| Experiment Setup | Yes | For adversarial training, the initial learning rate is set to 2e-1 for a batch size of 256, following Pang et al. (2020), and is divided by 10 at the 75th and 90th epochs. The weight decay is 2e-4 for CIFAR-10 and 5e-4 for Mini-ImageNet. The number of epochs is set to 91, using the early-stopping strategy of Rice et al. (2020). (A minimal training-schedule sketch follows the table.) |
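
The experiment-setup row above fully specifies the optimization schedule for CIFAR-10. The following is a minimal sketch of that schedule, assuming a standard PyTorch/torchvision setup; the ResNet-18 backbone and the SGD momentum value are illustrative assumptions (only the learning rate, batch size, weight decay, milestone epochs, and epoch count are quoted from the table), and the PAD-specific adversarial-example generation step is omitted.

```python
import torch
import torchvision
import torchvision.transforms as T

# CIFAR-10 with the reported batch size of 256.
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=T.ToTensor()
)
train_loader = torch.utils.data.DataLoader(train_set, batch_size=256, shuffle=True)

model = torchvision.models.resnet18(num_classes=10)  # backbone is an assumption

# Reported hyperparameters: lr 2e-1, weight decay 2e-4 (CIFAR-10).
# Momentum 0.9 is a common default, not stated in the report.
optimizer = torch.optim.SGD(
    model.parameters(), lr=2e-1, momentum=0.9, weight_decay=2e-4
)
# Learning rate divided by 10 at the 75th and 90th epochs.
scheduler = torch.optim.lr_scheduler.MultiStepLR(
    optimizer, milestones=[75, 90], gamma=0.1
)
criterion = torch.nn.CrossEntropyLoss()

for epoch in range(91):  # 91 epochs; the paper applies early stopping
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

For Mini-ImageNet, the report indicates the same schedule with the weight decay changed to 5e-4; the PAD defense itself (Algorithm 1 in the paper) wraps additional phase-aware steps around this loop that are not reproduced here.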