Improving Adversarial Robustness via Mutual Information Estimation

Authors: Dawei Zhou, Nannan Wang, Xinbo Gao, Bo Han, Xiaoyu Wang, Yibing Zhan, Tongliang Liu

ICML 2022

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | "Empirical evaluations demonstrate that our method could effectively improve the adversarial accuracy against multiple attacks." (From Abstract) and "In this section, we first introduce the experiment setups including datasets, attack setting and defense setting in Section 4.1. Then, we show the effectiveness of our optimization mechanism for evaluating MI in Section 4.2. Next, we evaluate the performances of the proposed adversarial defense algorithm in Section 4.3. Finally, we conduct ablation studies in Section 4.4." (From Section 4: Experiments)
Researcher Affiliation | Collaboration | (1) State Key Laboratory of Integrated Services Networks, School of Telecommunications Engineering, Xidian University; (2) Chongqing Key Laboratory of Image Cognition, Chongqing University of Posts and Telecommunications; (3) Department of Computer Science, Hong Kong Baptist University; (4) The Chinese University of Hong Kong (Shenzhen); (5) JD Explore Academy; (6) TML Lab, Sydney AI Centre, The University of Sydney
Pseudocode | Yes | Algorithm 1: Natural-adversarial mutual information-based defense (NAMID) algorithm (Section 3.3.2)
Open Source Code | Yes | "The code is available at https://github.com/dwDavidxd/MIAT." (Section 3.3)
Open Datasets | Yes | "We verify the effectiveness of our defense algorithm on two popular benchmark datasets, i.e., CIFAR-10 (Krizhevsky et al., 2009) and Tiny-ImageNet (Wu et al., 2017)." (Section 4.1)
Dataset Splits | Yes | "Tiny-ImageNet has 200 classes of images including 100,000 training images, 10,000 validation images and 10,000 test images." (Section 4.1) A minimal loading sketch is given below the table.
Hardware Specification | No | The paper does not mention the hardware (e.g., GPU or CPU models, memory) used to run the experiments.
Software Dependencies | No | The paper does not provide specific software dependency versions (e.g., Python, PyTorch, or TensorFlow versions, or versions of other libraries).
Experiment Setup | Yes | "The iteration number of PGD and FWA is set to 40 with step size ϵ/4. The iteration number of CW2 and DDN is set to 20 with step size 0.01. For CIFAR-10 and Tiny-ImageNet, the perturbation budgets for L2-norm attacks and L∞-norm attacks are ϵ = 0.5 and ϵ = 8/255 respectively... The epoch number is set to 100. For fair comparisons, all the methods are trained using SGD with momentum 0.9, weight decay 2×10⁻⁴, batch size 1024 and an initial learning rate of 0.1, which is divided by 10 at the 75th and 90th epochs. In addition, we adjust the hyperparameter settings of the defense methods so that the natural accuracy is not severely compromised and then compare the adversarial accuracy. We set α = 5, λ = 0.1 for our algorithm." (Section 4.1)
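For concreteness, the reported training schedule and attack settings translate directly into code. The following is a minimal sketch assuming a PyTorch implementation; it is not the authors' code, the model is a placeholder, and `pgd_attack` reproduces only the reported L∞ PGD settings for CIFAR-10 (budget 8/255, 40 iterations, step size ϵ/4).

```python
# Minimal sketch of the reported training schedule and PGD settings.
# Assumes PyTorch; not the authors' implementation.
import torch
import torch.nn as nn
import torch.optim as optim


def build_optimizer(model: nn.Module):
    # Reported: SGD, momentum 0.9, weight decay 2e-4, initial lr 0.1,
    # divided by 10 at the 75th and 90th of 100 epochs (batch size 1024).
    optimizer = optim.SGD(model.parameters(), lr=0.1,
                          momentum=0.9, weight_decay=2e-4)
    scheduler = optim.lr_scheduler.MultiStepLR(
        optimizer, milestones=[75, 90], gamma=0.1)
    return optimizer, scheduler


def pgd_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
               eps: float = 8 / 255, steps: int = 40) -> torch.Tensor:
    # Reported L-inf PGD settings for CIFAR-10: budget eps = 8/255,
    # 40 iterations, step size eps/4.
    step_size = eps / 4
    delta = torch.empty_like(x).uniform_(-eps, eps)
    for _ in range(steps):
        delta.requires_grad_(True)
        loss = nn.functional.cross_entropy(
            model(torch.clamp(x + delta, 0.0, 1.0)), y)
        grad, = torch.autograd.grad(loss, delta)
        delta = torch.clamp(delta.detach() + step_size * grad.sign(),
                            -eps, eps)
    return torch.clamp(x + delta, 0.0, 1.0)
```

The α = 5 and λ = 0.1 values belong to the NAMID objective itself and are not reproduced here; only the optimizer, schedule, and attack budget quoted above are sketched.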
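Similarly, the datasets named in the Open Datasets and Dataset Splits rows can be loaded as sketched below, assuming torchvision; the Tiny-ImageNet directory path and folder layout are hypothetical placeholders, since the official archive must be downloaded and arranged separately.

```python
# Dataset-loading sketch (torchvision assumed; the Tiny-ImageNet path
# and folder layout are hypothetical placeholders).
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# CIFAR-10: 50,000 training and 10,000 test images, fetched on demand.
cifar_train = datasets.CIFAR10("./data", train=True, download=True,
                               transform=to_tensor)
cifar_test = datasets.CIFAR10("./data", train=False, download=True,
                              transform=to_tensor)

# Tiny-ImageNet: 200 classes with 100,000 training, 10,000 validation
# and 10,000 test images, read here from pre-extracted class folders.
tiny_train = datasets.ImageFolder("./tiny-imagenet-200/train",
                                  transform=to_tensor)
tiny_val = datasets.ImageFolder("./tiny-imagenet-200/val",
                                transform=to_tensor)
```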