Black-Box Certification with Randomized Smoothing: A Functional Optimization Based Framework

Authors: Dinghuai Zhang, Mao Ye, Chengyue Gong, Zhanxing Zhu, Qiang Liu

NeurIPS 2020

Reproducibility Variable Result LLM Response
Research Type Experimental 5 Experiments We evaluate the proposed certification bound and smoothing distributions for ℓ1, ℓ2 and ℓ∞ attacks. We compare with the randomized smoothing method of [14] with Laplacian smoothing for ℓ1 region certification. For the ℓ2 and ℓ∞ cases, we regard the method derived by [9] with a Gaussian smoothing distribution as the baseline. For fair comparison, we use the same model architectures and pretrained models provided by [14], [9] and [10], which are ResNet-110 for CIFAR-10 and ResNet-50 for ImageNet. We use the official code provided by [9] for all the following experiments.
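The Gaussian-smoothing baseline of [9] referenced above can be illustrated with a minimal Monte Carlo sketch. This is a simplified illustration, not the paper's implementation: the classifier, inputs, and sample counts are hypothetical, and a real certification would use a confidence lower bound on the top-class probability (e.g. a Clopper-Pearson interval) rather than the raw empirical frequency used here.

```python
import random
from statistics import NormalDist

def certify_l2_radius(classifier, x, sigma=0.25, n=1000, seed=0):
    """Estimate the certified l2 radius of a smoothed classifier by
    sampling Gaussian perturbations of the input x (a list of floats).

    Returns (predicted_label, radius); radius = sigma * Phi^{-1}(p_A),
    where p_A is the (clamped) empirical top-class probability.
    """
    rng = random.Random(seed)
    counts = {}
    for _ in range(n):
        noisy = [xi + rng.gauss(0.0, sigma) for xi in x]
        label = classifier(noisy)
        counts[label] = counts.get(label, 0) + 1
    top_label, top_count = max(counts.items(), key=lambda kv: kv[1])
    # Clamp away from 1.0 so the inverse normal CDF stays finite.
    p_a = min(top_count / n, 1.0 - 1.0 / n)
    if p_a <= 0.5:
        return top_label, 0.0  # abstain: no certified radius
    radius = sigma * NormalDist().inv_cdf(p_a)
    return top_label, radius

# Toy base classifier: thresholds the mean of the input vector.
clf = lambda v: int(sum(v) / len(v) > 0.0)
label, radius = certify_l2_radius(clf, [1.0, 1.0, 1.0], sigma=0.25)
```

For a confidently classified input like the one above, nearly all noisy samples keep the same label, so the certified radius is strictly positive; inputs near the decision boundary yield an abstention (radius 0).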
Researcher Affiliation Academia Dinghuai Zhang (Mila, dinghuai.zhang@mila.quebec); Mao Ye, Chengyue Gong (Department of Computer Science, University of Texas at Austin, {my21, cygong}@cs.utexas.edu); Zhanxing Zhu (School of Mathematical Sciences, Peking University, zhanxing.zhu@pku.edu.cn); Qiang Liu (Department of Computer Science, University of Texas at Austin, lqiang@cs.utexas.edu)
Pseudocode No The paper describes computational methods but does not include any clearly labeled pseudocode or algorithm blocks.
Open Source Code No The paper states: 'We use the official code provided by [9] for all the following experiments.' The accompanying footnote links to 'https://github.com/locuslab/smoothing'. This refers to code from a previous work, not the authors' own implementation of their proposed framework and distributions.
Open Datasets Yes Empirical results show that the new framework and smoothing distributions outperform existing approaches for ℓ1, ℓ2 and ℓ∞ attacks on datasets such as CIFAR-10 and ImageNet.
Dataset Splits No The paper mentions using 'ResNet-110 for CIFAR-10 and ResNet-50 for ImageNet' and that 'We use the official code provided by [9] for all the following experiments.' It does not explicitly specify the training, validation, or test dataset splits used in its own experiments.
Hardware Specification No The paper does not provide specific details about the hardware (e.g., GPU models, CPU types, or cloud instances) used for running the experiments.
Software Dependencies No The paper mentions using pre-trained models and official code from prior works but does not list specific software dependencies with version numbers (e.g., Python, PyTorch, TensorFlow versions).
Experiment Setup No The paper states: 'For all other details and parameter settings, we refer the readers to Appendix B.2.', indicating that specific experimental setup details are deferred to the appendix rather than provided in the main text.