Improved, Deterministic Smoothing for ℓ1 Certified Robustness
Authors: Alexander J Levine, Soheil Feizi
ICML 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | On CIFAR-10 and ImageNet datasets, we provide substantially larger ℓ1 robustness certificates compared to prior works, establishing a new state-of-the-art. The determinism of our method also leads to significantly faster certificate computation. Code is available at: https://github.com/alevine0/smoothingSplittingNoise. ... We evaluated the performance of our method on CIFAR-10 and ImageNet datasets, matching all experimental conditions from (Yang et al., 2020) as closely as possible (further details are given in the appendix). Certification performance data is given in Table 1 for CIFAR-10 and Figure 7 for ImageNet. |
| Researcher Affiliation | Academia | Alexander Levine, Soheil Feizi ... Department of Computer Science, University of Maryland, College Park, Maryland, USA. |
| Pseudocode | No | The paper does not contain structured pseudocode or algorithm blocks. |
| Open Source Code | Yes | Code is available at: https://github.com/alevine0/smoothingSplittingNoise. |
| Open Datasets | Yes | On CIFAR-10 and ImageNet datasets, we provide substantially larger ℓ1 robustness certificates compared to prior works |
| Dataset Splits | No | The paper mentions using CIFAR-10 and ImageNet datasets and matching experimental conditions from previous work, but it does not explicitly provide specific training, validation, or test dataset split percentages or sample counts within the main text. |
| Hardware Specification | Yes | We used a single NVIDIA 2080 Ti GPU. |
| Software Dependencies | No | The paper does not provide specific software details, such as library or solver names with version numbers. |
| Experiment Setup | No | The paper states that it matches the experimental conditions of prior work (Yang et al., 2020) and defers further details to the appendix, but the main text does not provide concrete hyperparameter values or detailed training configurations. |
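
For context on the quantity the paper certifies: with uniform additive smoothing noise U[-λ, λ] per coordinate, the classical ℓ1 certified radius (as derived in Yang et al., 2020, the baseline this paper improves on and derandomizes) is 2λ(p − 1/2), where p is the smoothed classifier's top-class probability. The sketch below is a hypothetical illustration of that baseline formula only, not the paper's improved bound or released code.

```python
def l1_certified_radius(p_top: float, lam: float) -> float:
    """Baseline certified l1 radius for uniform-noise smoothing.

    p_top: probability of the top class under noise U[-lam, lam]^d.
    lam:   half-width of the uniform smoothing noise.
    Returns 0.0 when p_top <= 1/2, since no certificate holds then.
    (Illustrative sketch of the Yang et al. (2020) bound; the paper's
    improved, deterministic certificate is strictly larger.)
    """
    if not 0.0 <= p_top <= 1.0:
        raise ValueError("p_top must be a probability in [0, 1]")
    return max(0.0, 2.0 * lam * (p_top - 0.5))


# Example: noise scale lam = 1.0 and top-class probability 0.8 give a
# certified radius of 2 * 1.0 * (0.8 - 0.5) = 0.6.
print(l1_certified_radius(0.8, 1.0))
```

Determinism matters here because a derandomized smoothed classifier evaluates p exactly rather than bounding it from Monte Carlo samples, which is why the paper reports faster certificate computation.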