Diffusion Models Demand Contrastive Guidance for Adversarial Purification to Advance

Authors: Mingyuan Bai, Wei Huang, Tenghui Li, Andong Wang, Junbin Gao, Cesar F Caiafa, Qibin Zhao

ICML 2024 | Conference PDF | Archive PDF | Plain Text | LLM Run Details

Reproducibility Variable | Result | LLM Response
Research Type | Experimental | Empirically, extensive experiments on CIFAR-10, CIFAR-100, the German Traffic Sign Recognition Benchmark and ImageNet datasets with ResNet and WideResNet classifiers show that our method outperforms most of current adversarial training and adversarial purification methods by a large improvement.
Researcher Affiliation | Academia | 1 Tensor Learning Team, Center of Advanced Intelligence Project, RIKEN, Tokyo, 1030027, JAPAN; 2 Deep Learning Theory Team, Center of Advanced Intelligence Project, RIKEN, Tokyo, 1030027, JAPAN; 3 School of Automation, Guangdong University of Technology, Guangzhou, 510006, CHINA; 4 Key Laboratory of Intelligent Detection and the Internet of Things in Manufacturing, Ministry of Education, Guangzhou, 510006, CHINA; 5 Discipline of Business Analytics, The University of Sydney Business School, The University of Sydney, Darlington, NSW, 2006, AUSTRALIA; 6 Instituto Argentino de Radioastronomía, CONICET CCT La Plata/CIC-PBA/UNLP, V. Elisa, 1894, ARGENTINA.
Pseudocode | Yes | Algorithm 1: Adversarial Purification in Contrastive Guided Diffusion Models (an illustrative sketch of such a guided purification step follows the table).
Open Source Code | Yes | The code is available at https://github.com/tenghuilee/ContrastDiffPurification.
Open Datasets | Yes | We investigate the empirical performance of Contrastive Guided Diffusion Model for Adversarial Purification on four benchmark datasets: CIFAR-10, CIFAR-100, the German Traffic Sign Recognition Benchmark (GTSRB) (Houben et al., 2013) and ImageNet.
Dataset Splits | Yes | For the CIFAR-10 dataset (Krizhevsky et al., 2009), we follow Nie et al. (2022)'s settings to select data for evaluation. Note that because of the computational power constraint, we present the results from the randomly selected subsets of the dataset in Section 5.2. We also use the same setting on the CIFAR-100 (Krizhevsky et al., 2009) dataset for evaluation. The GTSRB dataset contains 39,252 training images in 43 classes and 12,629 images for testing... (A subset-selection sketch follows the table.)
Hardware Specification | No | The paper does not provide specific details on the hardware (e.g., GPU models, CPU types, or memory specifications) used for running the experiments.
Software Dependencies | No | The paper does not provide specific version numbers for software dependencies such as programming languages, libraries, or frameworks used in the experiments.
Experiment Setup | Yes | For the CIFAR-10 dataset... We test the performance of all the baselines and Contrastive Guided Diffusion Model for Purification on WideResNet-28-10, WideResNet-70-16 and ResNet-50... We present our experimental results on strong adaptive attacks in this section, where AutoAttack ℓ∞ and ℓ2 threat models are applied... (t = 0.1 for diffusion models). λ is the hyperparameter representing the strength of guidance. τ is the temperature, which is a hyperparameter. (An illustrative end-to-end evaluation sketch follows the table.)
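
The Pseudocode row refers to Algorithm 1 of the paper. As a rough illustration of what a contrastively guided purification step involves, the sketch below adds the gradient of an InfoNCE-style loss (with temperature τ, guidance strength λ) to a DDPM-style reverse update. The denoiser and encoder interfaces, the guidance sign, the noise schedule, and the default values of λ and τ are all assumptions for illustration, not the authors' released implementation.

```python
# Minimal sketch of contrastive-guided purification; placeholder interfaces, not the paper's code.
import torch
import torch.nn.functional as F


def contrastive_grad(x, x_adv, encoder, tau=0.1):
    """Gradient of an InfoNCE-style loss between features of x and x_adv (hypothetical encoder)."""
    with torch.enable_grad():
        x = x.detach().requires_grad_(True)
        z = F.normalize(encoder(x), dim=-1)
        z_adv = F.normalize(encoder(x_adv), dim=-1)
        logits = z @ z_adv.T / tau                     # scaled cosine similarities
        labels = torch.arange(x.shape[0], device=x.device)
        loss = F.cross_entropy(logits, labels)         # match each sample to its own counterpart
        (grad,) = torch.autograd.grad(loss, x)
    return grad


@torch.no_grad()
def purify(x_adv, denoise_step, encoder, betas, t_star, lam=0.05, tau=0.1):
    """Diffuse x_adv up to t_star, then run a contrastively guided reverse pass."""
    alphas_bar = torch.cumprod(1.0 - betas, dim=0)
    a_bar = alphas_bar[t_star]
    # Forward diffusion: inject noise to wash out adversarial perturbations.
    x = a_bar.sqrt() * x_adv + (1.0 - a_bar).sqrt() * torch.randn_like(x_adv)
    # Guided reverse diffusion: denoise while nudging features back toward x_adv.
    for t in range(t_star, 0, -1):
        mean = denoise_step(x, t)                      # assumed to return the posterior mean of x_{t-1}
        g = contrastive_grad(x, x_adv, encoder, tau)   # contrastive guidance direction
        noise = torch.randn_like(x) if t > 1 else torch.zeros_like(x)
        x = mean - lam * g + betas[t - 1].sqrt() * noise
    return x
```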
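The Dataset Splits row notes that evaluation uses randomly selected subsets of the standard test splits because of compute constraints. A minimal sketch of that kind of subset selection with torchvision is below; the subset size (512), seed, and transform are placeholders rather than the paper's exact protocol. torchvision also provides CIFAR100 and GTSRB (split="test") with the same loading pattern.

```python
# Sketch of drawing a random evaluation subset from a standard test split (sizes/seed are placeholders).
import torch
from torch.utils.data import DataLoader, Subset
import torchvision
import torchvision.transforms as T

transform = T.ToTensor()  # CIFAR images are already 32x32

# CIFAR-10 test split; swap in torchvision.datasets.CIFAR100 or GTSRB(split="test") as needed.
test_set = torchvision.datasets.CIFAR10(
    root="./data", train=False, download=True, transform=transform)

# Randomly select a fixed-size evaluation subset.
g = torch.Generator().manual_seed(0)
indices = torch.randperm(len(test_set), generator=g)[:512].tolist()
eval_loader = DataLoader(Subset(test_set, indices), batch_size=64, shuffle=False)
```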
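The Experiment Setup row describes evaluation under AutoAttack ℓ∞ and ℓ2 threat models with WideResNet/ResNet classifiers behind the purifier. The sketch below shows how such an end-to-end evaluation is commonly wired up with the public AutoAttack package (github.com/fra31/auto-attack); the wrapper class, epsilon values, and batch size are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of evaluating a purifier + classifier pipeline with AutoAttack (settings are assumptions).
import torch.nn as nn
from autoattack import AutoAttack  # external AutoAttack package (github.com/fra31/auto-attack)


class PurifyThenClassify(nn.Module):
    """Wraps purification followed by classification so AutoAttack attacks the full pipeline."""

    def __init__(self, purifier, classifier):
        super().__init__()
        self.purifier = purifier      # e.g. a diffusion-based purifier
        self.classifier = classifier  # e.g. WideResNet-28-10 / WideResNet-70-16 / ResNet-50

    def forward(self, x):
        return self.classifier(self.purifier(x))


def robust_evaluation(model, x_test, y_test, norm="Linf", eps=8 / 255):
    # norm="L2" with eps=0.5 is the other threat model mentioned in the setup;
    # these epsilon values are the common CIFAR-10 defaults, an assumption here.
    adversary = AutoAttack(model, norm=norm, eps=eps, version="standard")
    return adversary.run_standard_evaluation(x_test, y_test, bs=64)
```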