Enhancing Adversarial Robustness via Score-Based Optimization

Authors: Boya Zhang, Weijian Luo, Zhihua Zhang

NeurIPS 2023

Reproducibility

Variable | Result | LLM Response
Research Type | Experimental | We conduct comprehensive experiments on multiple datasets, including CIFAR-10, CIFAR-100, and ImageNet. Our experimental results demonstrate that our approach outperforms existing adversarial defenses in terms of both robustness performance and inference speed.
Researcher Affiliation | Academia | Academy for Advanced Interdisciplinary Studies, Peking University (zhangboya@pku.edu.cn); School of Mathematical Sciences, Peking University (luoweijian@stu.pku.edu.cn); School of Mathematical Sciences, Peking University (zhzhang@math.pku.edu.cn)
Pseudocode | Yes | Algorithm 1 (ScoreOpt-O): optimizing adversarial samples toward robustness with a score-based prior; Algorithm 2 (ScoreOpt-N): optimizing noisy adversarial samples with one-shot denoising.
Open Source Code | Yes | Code is available at https://github.com/zzzhangboya/ScoreOpt.git.
Open Datasets | Yes | Three datasets are considered in our experiments: CIFAR-10, CIFAR-100, and ImageNet. The CIFAR-10 and CIFAR-100 datasets contain 50,000 training images and 10,000 test images, with 10 and 100 classes respectively. ImageNet consists of a validation set with 50,000 examples, featuring 1,000 classes and images at a resolution of 256x256 pixels with three color channels.
Dataset Splits | Yes | ImageNet consists of a validation set with 50,000 examples, featuring 1,000 classes and images at a resolution of 256x256 pixels with three color channels.
Hardware Specification | Yes | All of our experiments are conducted using GPUs. Specifically, the diffusion models and base classifiers are trained in parallel on eight GPUs. Each test-time robustness evaluation is performed on a single GPU. The GPUs used are NVIDIA TITAN RTX with 24 GB of memory.
Software Dependencies | No | The paper mentions software components such as the Adam optimizer, the U-Net architecture, and the DDPM++ model architecture, but does not provide specific version numbers for these or any other software dependencies.
Experiment Setup | Yes | Table 10: Hyper-parameter choices of our optimization process for experimental results in the main text and appendix (the table lists the learning rate, step count, and noise level for various attacks and datasets). Also, "The step size α is set to ϵ/4."
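The pseudocode row above describes Algorithm 1 (ScoreOpt-O) as optimizing an adversarial sample toward robustness with a score-based prior. A minimal sketch of that idea is a gradient-ascent purification loop on the log-density. Everything below is illustrative, not the paper's implementation: `toy_score` (an isotropic Gaussian score) stands in for the pretrained diffusion score network the method actually uses, and `score_opt_purify`, `lr`, and `steps` are hypothetical names and values.

```python
import numpy as np

def toy_score(x, mu=0.0, sigma=1.0):
    """Score of an isotropic Gaussian, grad_x log p(x).
    Stand-in for a learned score network s_theta(x, t)."""
    return -(x - mu) / sigma**2

def score_opt_purify(x_adv, score_fn, lr=0.1, steps=50):
    """Move the (adversarial) input toward high-density regions
    of the prior by ascending the score direction."""
    x = np.array(x_adv, dtype=float)
    for _ in range(steps):
        x = x + lr * score_fn(x)
    return x

# An input pushed off the data manifold drifts back toward the mode.
purified = score_opt_purify(np.array([3.0, -2.5]), toy_score)
```

The purified sample is then fed to the base classifier; with the toy Gaussian prior the loop simply contracts the input toward the mean, whereas the real method follows the diffusion model's score field.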
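The setup row quotes a step size of α = ϵ/4. Assuming this refers to the per-iteration step of an L∞-bounded, PGD-style update (a common convention), one projected step can be sketched as below; `pgd_step` and its arguments are hypothetical names for illustration.

```python
import numpy as np

def pgd_step(x, grad, x_orig, eps):
    """One signed-gradient step with alpha = eps/4, followed by
    projection onto the L_inf ball of radius eps around x_orig."""
    alpha = eps / 4
    x_new = x + alpha * np.sign(grad)
    return np.clip(x_new, x_orig - eps, x_orig + eps)

# With alpha = eps/4, the iterate reaches the ball boundary in four
# steps and the projection keeps it there afterwards.
eps = 8 / 255
x_orig = np.zeros(3)
x = x_orig.copy()
for _ in range(10):
    x = pgd_step(x, np.ones(3), x_orig, eps)
```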