Improving l1-Certified Robustness via Randomized Smoothing by Leveraging Box Constraints

Authors: Václav Voráček, Matthias Hein

ICML 2023

Reproducibility Variable Result LLM Response
Research Type Experimental We performed an extensive evaluation on CIFAR10 (Krizhevsky et al., 2009) and ImageNet-1k (Russakovsky et al., 2015)
Researcher Affiliation Academia Václav Voráček 1 Matthias Hein 1 1 Tübingen AI Center, University of Tübingen. Correspondence to: Václav Voráček <vaclav.voracek@uni-tuebingen.de>, Matthias Hein <matthias.hein@uni-tuebingen.de>.
Pseudocode Yes Algorithm 1 Randomized Smoothing Certification
Open Source Code Yes Code is released at https://github.com/vvoracek/L1-smoothing.
Open Datasets Yes We performed an extensive evaluation on CIFAR10 (Krizhevsky et al., 2009) and ImageNet-1k (Russakovsky et al., 2015)
Dataset Splits Yes For CIFAR-10 dataset we certify 2 000 images from the test set, while for ImageNet we certify the same subset of 500 images as Cohen et al. (2019) and Levine & Feizi (2021).
Hardware Specification No No specific hardware details (like GPU models, CPU types, or memory) used for running the experiments are mentioned in the paper.
Software Dependencies No The paper mentions 'Python function proportion_confint from package statsmodels.stats.proportion' but does not provide specific version numbers for this or any other software dependency.
Experiment Setup Yes For ImageNet we used a ResNet-50 trained for 30 epochs; and for CIFAR-10 we used a WideResNet-40-2 trained for 120 epochs. The optimizer is SGD with learning rate 0.1, Nesterov momentum 0.9 and weight decay 0.0001 with cosine annealing learning rate schedule and batch size is 64 for both models.
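The certification procedure referenced above (Algorithm 1 and the statsmodels `proportion_confint` call) follows the standard randomized-smoothing recipe: sample many noised copies of the input, lower-bound the top-class probability, and convert that bound into a radius. The sketch below is a generic, stdlib-only illustration in the style of Cohen et al. (2019), not the paper's ℓ1-specific certificate: it substitutes a Hoeffding lower bound for the Clopper-Pearson interval the paper uses, and reports the Gaussian-smoothing ℓ2 radius, whereas the paper derives a tighter ℓ1 bound using box constraints. The `sample_fn` callable is a hypothetical stand-in for "noise the input and query the base classifier".

```python
import math
import random
from statistics import NormalDist

def certify(sample_fn, n: int, sigma: float, alpha: float = 0.001):
    """Hedged sketch of randomized-smoothing certification.

    sample_fn() should return True when the base classifier predicts the
    top class on a freshly noised copy of the input. The paper uses a
    Clopper-Pearson interval (statsmodels' proportion_confint); a Hoeffding
    lower bound stands in here so the sketch stays stdlib-only.
    Returns a certified radius, or None to abstain.
    """
    hits = sum(sample_fn() for _ in range(n))
    p_hat = hits / n
    # Hoeffding: with prob. >= 1 - alpha, true p >= p_hat - sqrt(ln(1/alpha)/(2n))
    p_lower = p_hat - math.sqrt(math.log(1 / alpha) / (2 * n))
    if p_lower <= 0.5:
        return None  # abstain: majority class not certified at this confidence
    # Gaussian-smoothing l2 radius (Cohen et al.); the paper's l1 bound is larger.
    return sigma * NormalDist().inv_cdf(p_lower)
```

For example, a base classifier that is correct on 99% of noised samples certifies a positive radius with 10 000 samples, while one that is never correct abstains.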
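The quoted training setup names a cosine annealing learning rate schedule with base rate 0.1. That schedule reduces to a one-line formula; a minimal stdlib sketch, assuming the common eta_min = 0 variant (the default in PyTorch's `CosineAnnealingLR`, which the paper does not explicitly name):

```python
import math

def cosine_lr(epoch: int, total_epochs: int, base_lr: float = 0.1) -> float:
    """Cosine-annealed learning rate: decays from base_lr at epoch 0
    to 0 at epoch total_epochs, following half a cosine period."""
    return 0.5 * base_lr * (1 + math.cos(math.pi * epoch / total_epochs))
```

With the reported CIFAR-10 budget of 120 epochs, the rate starts at 0.1, passes through 0.05 at epoch 60, and reaches 0 at epoch 120.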