(De)Randomized Smoothing for Certifiable Defense against Patch Attacks

Authors: Alexander Levine, Soheil Feizi

NeurIPS 2020

Reproducibility variables, results, and supporting LLM-response excerpts:

Research Type: Experimental. "Our method can be trained significantly faster, achieves high clean and certified robust accuracy on CIFAR-10, and provides certificates at ImageNet scale. For example, for a 5×5 patch attack on CIFAR-10, our method achieves up to around 57.6% certified accuracy (with a classifier with around 83.8% clean accuracy), compared to at most 30.3% certified accuracy for the existing method (with a classifier with around 47.8% clean accuracy). Our results effectively establish a new state-of-the-art of certifiable defense against patch attacks on CIFAR-10 and ImageNet."

Researcher Affiliation: Academia. "Alexander Levine and Soheil Feizi, Department of Computer Science, University of Maryland, College Park, MD 20742, {alevine0, sfeizi}@cs.umd.edu"

Pseudocode: No. No structured pseudocode or algorithm blocks are present in the paper.

Open Source Code: No. The paper does not provide an explicit statement about open-sourcing the code or a link to a code repository.

Open Datasets: Yes. "For example, for a 5×5 patch attack on CIFAR-10, our method achieves up to around 57.6% certified accuracy (with a classifier with around 83.8% clean accuracy)... Our results effectively establish a new state-of-the-art of certifiable defense against patch attacks on CIFAR-10 and ImageNet."

Dataset Splits: Yes. "Results in the figures are using a validation set of 5,000 images; the final results reported in Table 1 are on a separate test set of 5,000 images."

Hardware Specification: Yes. "The training time for the reported best model was 8.4 GPU hours for MNIST, and 15.4 GPU hours for CIFAR-10, using NVIDIA 2080 Ti GPUs."

Software Dependencies: No. The paper does not provide specific software dependencies with version numbers.

Experiment Setup: Yes. "Best certified accuracy is achieved using Column Smoothing for both datasets, with s = 2, θ = 0.3 for MNIST and s = 4, θ = 0.3 for CIFAR-10."
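The Column Smoothing setting quoted above (band width s, abstention threshold θ) lends itself to a short illustration of how such a certificate is checked. The sketch below is a hedged reconstruction, not the authors' released code: the function name, its arguments, and the vote-count representation are all assumptions for illustration. It assumes the base classifier has already voted on every column-ablated copy of the image, and checks whether the top class's margin survives the worst case, using the paper's observation that a patch of width m can intersect at most m + s − 1 retained column bands.

```python
def certify_column_smoothing(votes, patch_width, band_width):
    """Illustrative sketch (assumed interface, not the authors' code) of a
    column-smoothing patch-robustness certificate.

    votes[c] counts, over all column positions, how many ablated copies of
    the image the base classifier assigned to class c (abstentions under
    the threshold θ are simply excluded from the counts). A patch
    patch_width pixels wide can intersect at most
    patch_width + band_width - 1 retained column bands, so it can corrupt
    at most that many votes; each corrupted vote can shift the gap between
    the top class and the runner-up by at most 2.
    """
    delta = patch_width + band_width - 1
    ranked = sorted(range(len(votes)), key=lambda c: votes[c], reverse=True)
    top, runner_up = ranked[0], ranked[1]
    gap = votes[top] - votes[runner_up]
    # Certified only if a worst-case adversary cannot close the gap
    # (strict inequality covers unfavorable tie-breaking).
    return top, gap > 2 * delta
```

For example, with the CIFAR-10 setting quoted above (s = 4) and a 5-pixel-wide patch, delta = 8, so the top class must lead the runner-up by more than 16 votes for the prediction to be certified.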