Multi-scale Diffusion Denoised Smoothing

Authors: Jongheon Jeong, Jinwoo Shin

NeurIPS 2023

Reproducibility checklist. Each entry lists the variable, the assessed result, and the supporting evidence quoted from the paper:
Research Type: Experimental. Evidence: "Our experiments show that the proposed multi-scale smoothing scheme, combined with diffusion fine-tuning, not only allows strong certified robustness at high noise scales but also maintains accuracy close to non-smoothed classifiers. ... In our experiments, we evaluate our proposed schemes on CIFAR-10 [39] and ImageNet [56], two standard benchmarks for certified ℓ2-robustness..."
Researcher Affiliation: Academia. Evidence: "Jongheon Jeong, Jinwoo Shin. Korea Advanced Institute of Science and Technology (KAIST), Daejeon, South Korea. {jongheonj, jinwoos}@kaist.ac.kr"
Pseudocode: Yes. Evidence: "Algorithm 1 Focal smoothing: A grid-search based optimization of (22)"
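The paper's Algorithm 1 optimizes its Eq. (22) by grid search; that equation is not reproduced in this excerpt, so the objective below is a toy stand-in and only the grid-search pattern itself reflects the paper. A minimal sketch, with all names hypothetical:

```python
def grid_search(objective, grid):
    """Return the grid point maximizing a black-box objective, with its value.

    A generic grid-search optimizer; the paper's Algorithm 1 applies this
    pattern to its own objective (Eq. (22)), which is not shown here.
    """
    best_x, best_val = None, float("-inf")
    for x in grid:
        val = objective(x)
        if val > best_val:
            best_x, best_val = x, val
    return best_x, best_val

# Toy objective peaking at sigma = 0.5, evaluated over a small grid of scales.
best_sigma, best_val = grid_search(lambda s: -(s - 0.5) ** 2,
                                   [0.125, 0.25, 0.5, 1.0])
print(best_sigma)  # -> 0.5
```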
Open Source Code: Yes. Evidence: "Code is available at https://github.com/jh-jeong/smoothing-multiscale."
Open Datasets: Yes. Evidence: "We evaluate our proposed schemes on CIFAR-10 [39] and ImageNet [56], two standard datasets for evaluating certified ℓ2-robustness. ... CIFAR-10 [39] consists of 60,000 images of size 32 × 32 pixels, 50,000 for training and 10,000 for testing. ... The full dataset can be downloaded at https://www.cs.toronto.edu/~kriz/cifar.html. ... ImageNet [56], also known as the ILSVRC 2012 classification dataset, consists of 1.2 million high-resolution training images and 50,000 validation images... A link for downloading the full dataset can be found at http://image-net.org/download."
Dataset Splits: Yes. Evidence: "CIFAR-10 [39] consists of 60,000 images of size 32 × 32 pixels, 50,000 for training and 10,000 for testing. ... ImageNet [56], also known as the ILSVRC 2012 classification dataset, consists of 1.2 million high-resolution training images and 50,000 validation images..."
Hardware Specification: Yes. Evidence: "Overall, we conduct our experiments with a cluster of 8 NVIDIA V100 32GB GPUs and 8 instances of a single NVIDIA A100 80GB GPU. All the CIFAR-10 experiments are run on a single NVIDIA A100 80GB GPU, including both the diffusion fine-tuning and the smoothed inference procedures. For the ImageNet experiments, we use 8 NVIDIA V100 32GB GPUs per run."
Software Dependencies: No. Evidence: "The paper refers to third-party codebases for training configurations (e.g., FT-CLIP [19], improved-diffusion) and mentions using the 'statsmodels' library, but it does not list specific software dependencies with version numbers for its own implementation."
Experiment Setup: Yes. Evidence: "Unless otherwise noted, we use p0 = 0.5 for cascaded smoothing throughout our experiments. We mainly consider two configurations of cascaded smoothing: (a) σ ∈ {0.25, 0.50, 1.00}, and (b) σ ∈ {0.25, 0.50}. ... For diffusion calibration, on the other hand, we use α = 1.0 by default. We use λ = 0.01 on CIFAR-10 and λ = 0.005 on ImageNet... Throughout our experiments, we use n = 10,000 noise samples to certify robustness for both CIFAR-10 and ImageNet. We follow [15] for the other hyperparameters to run CERTIFY, namely n0 = 100 and α = 0.001."