Diffusion Models are Certifiably Robust Classifiers

Authors: Huanran Chen, Yinpeng Dong, Shitong Shao, Zhongkai Hao, Xiao Yang, Hang Su, Jun Zhu

NeurIPS 2024

Reproducibility Variable Result LLM Response
Research Type Experimental Experimental results show the superior certified robustness of these Noised Diffusion Classifiers (NDCs).
Researcher Affiliation Collaboration Huanran Chen1,2, Yinpeng Dong1,2, Shitong Shao1, Zhongkai Hao1, Xiao Yang1, Hang Su1,3, Jun Zhu1,2; 1 Dept. of Comp. Sci. and Tech., Institute for AI, Tsinghua-Bosch Joint ML Center, THBI Lab, BNRist Center, Tsinghua University, Beijing, 100084, China; 2 RealAI; 3 Zhongguancun Laboratory, Beijing, China
Pseudocode Yes Algorithm 1 EPNDC
Open Source Code Yes Code is available at https://github.com/huanranchen/NoisedDiffusionClassifiers.
Open Datasets Yes Following previous studies [2, 47, 52], we evaluate the certified robustness of our method on two standard datasets, CIFAR-10 [19] and ImageNet [37], selecting a subset of 512 images from each.
Dataset Splits No The authors use off-the-shelf pretrained models and therefore never specify the training splits for those models; only the test subsets used for evaluation are described.
Hardware Specification Yes translating to about 3×10^6 seconds for certifying each image on a single 3090 GPU.
Software Dependencies No The paper mentions software components only by name and does not list dependencies with version numbers.
Experiment Setup Yes Experimental settings. Due to computational constraints, we employ a sample size of N = 10,000 to estimate p_A. ... To make a fair comparison with previous studies, we also select σ_τ ∈ {0.25, 0.5, 1.0} for certification (thus τ is determined) and use EDM [16] as our diffusion models.
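For context on the setup quoted above: certifying with N = 10,000 noisy samples and a noise level σ follows the randomized-smoothing recipe of Cohen et al. [2], on which the paper's Noised Diffusion Classifiers build. The sketch below illustrates that recipe only; `classify` is a hypothetical stand-in for the paper's diffusion classifier, and it uses a simple Hoeffding lower confidence bound on p_A (the literature typically uses the tighter Clopper-Pearson interval) so it stays stdlib-only.

```python
import math
import random
from collections import Counter
from statistics import NormalDist


def certify(classify, x, sigma=0.25, n=10_000, alpha=0.001):
    """Randomized-smoothing certification sketch (after Cohen et al.).

    Estimates p_A, the probability that `classify` returns the top class
    under Gaussian input noise of scale `sigma`, lower-bounds it, and
    returns (predicted class, certified L2 radius), or (None, 0.0) to
    abstain when the bound does not exceed 1/2.
    """
    counts = Counter()
    for _ in range(n):
        noisy = [xi + random.gauss(0.0, sigma) for xi in x]
        counts[classify(noisy)] += 1
    top_class, top_count = counts.most_common(1)[0]

    # Hoeffding lower confidence bound on p_A at level alpha; a
    # Clopper-Pearson bound would be tighter but needs SciPy.
    p_lower = top_count / n - math.sqrt(math.log(1.0 / alpha) / (2 * n))
    if p_lower <= 0.5:
        return None, 0.0  # abstain: top class not confidently a majority

    # Certified radius R = sigma * Phi^{-1}(p_A_lower).
    radius = sigma * NormalDist().inv_cdf(p_lower)
    return top_class, radius
```

With σ = 0.25 and a classifier that is nearly constant around the input, p_lower approaches 1 and the certified radius grows as σ·Φ⁻¹(p_lower), which is why the paper sweeps σ_τ over {0.25, 0.5, 1.0}: larger noise can certify larger radii but degrades clean accuracy.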