Et Tu Certifications: Robustness Certificates Yield Better Adversarial Examples
Authors: Andrew Craig Cullen, Shijie Liu, Paul Montague, Sarah Monazam Erfani, Benjamin I. P. Rubinstein
ICML 2024
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | To demonstrate the performance of our new Certification Aware Attack, we test our attack relative to a range of other comparable approaches. We emphasise that both our new attack and the reference attacks are deployed against certified models, rather than the associated base classifiers. To achieve this, our experiments consider attacks against MNIST (LeCun et al., 1998) (GNU v3.0 license), CIFAR10 (Krizhevsky et al., 2009) (MIT license), and the Large Scale Visual Recognition Challenge variant of ImageNet (Deng et al., 2009; Russakovsky et al., 2015) (which uses a custom, non-commercial license). A sketch of the kind of certification routine such attacks query appears after the table. |
| Researcher Affiliation | Collaboration | 1School of Computing and Information Systems, University of Melbourne, Parkville, Australia 2Defence Science and Technology Group, Adelaide, Australia. |
| Pseudocode | Yes | Algorithms detailing the aforementioned processes can be found within Appendices B and C, and the code associated with this work can be found at https://github.com/andrew-cullen/Attacking-Certified-Robustness. ... Algorithm 1: Certification Aware Attack Algorithm. ... Algorithm 2: Class prediction and certification for the Certification Aware Attack algorithm of Algorithm 1. A hedged sketch of the attack's core idea follows the table. |
| Open Source Code | Yes | Algorithms detailing the aforementioned processes can be found within Appendices B and C, and the code associated with this work can be found at https://github.com/andrew-cullen/Attacking-Certified-Robustness. |
| Open Datasets | Yes | To achieve this, our experiments consider attacks against MNIST (LeCun et al., 1998) (GNU v3.0 license), CIFAR10 (Krizhevsky et al., 2009) (MIT license), and the Large Scale Visual Recognition Challenge variant of ImageNet (Deng et al., 2009; Russakovsky et al., 2015) (which uses a custom, non-commercial license). A torchvision loading snippet follows the table. |
| Dataset Splits | No | The paper mentions training and testing data but does not explicitly describe a validation dataset split or how it was used in the experimental setup. |
| Hardware Specification | Yes | All calculations were performed using a single NVIDIA A100 GPU for MNIST and CIFAR-10, while ImageNet test- and training-time evaluations employed two. |
| Software Dependencies | No | The paper states that models were trained in "PyTorch (Paszke et al., 2019)", but it does not provide a specific version number for PyTorch or any other software dependencies. |
| Experiment Setup | Yes | The confidence intervals of expectations in all experiments were set according to the α = 0.005 significance level. ... Table 3: Parameter space employed for our Certification Aware Attack, PGD (see Equation 12 for details), and Carlini-Wagner (see Equation 13). Ours: ϵ_min/255 ∈ {1, 5, 10}, ϵ_max/255 ∈ {20, 40, 100, 255}, δ ∈ {0.01, 0.025, 0.05, 0.075, 0.1}. This grid is reconstructed in a sweep sketch after the table. |
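
The Research Type row emphasises that attacks target certified models rather than base classifiers. For context, below is a minimal sketch of a randomized-smoothing certification routine in the style of Cohen et al. (2019), the kind of certified model such attacks query. The `sigma` and `n` values are illustrative assumptions, not values from the paper; α = 0.005 matches the reported significance level.

```python
import torch
from scipy.stats import binomtest, norm

def certify(base_model, x, sigma=0.25, n=1000, alpha=0.005):
    """Monte Carlo certification of a smoothed classifier at x.

    Illustrative sketch only: sigma and n are assumed values, while
    alpha = 0.005 matches the paper's reported significance level.
    Expects x of shape (1, C, H, W) with pixels in [0, 1].
    """
    # Sample the base classifier's predictions under Gaussian input noise.
    noisy = x + sigma * torch.randn(n, *x.shape[1:])
    with torch.no_grad():
        preds = base_model(noisy).argmax(dim=1)
    top_class = preds.mode().values.item()
    count = int((preds == top_class).sum())
    # Clopper-Pearson lower confidence bound on the top-class probability.
    p_lower = binomtest(count, n).proportion_ci(
        confidence_level=1 - 2 * alpha, method="exact").low
    if p_lower <= 0.5:
        return top_class, 0.0  # abstain: nothing can be certified
    # Certified L2 radius in the style of Cohen et al. (2019).
    return top_class, sigma * norm.ppf(p_lower)
```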
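
The paper's Algorithm 1 lives in its Appendix B; what follows is only a hedged sketch of the core idea, not a reproduction of the paper's algorithm. The certificate guarantees that no perturbation of norm at most the certified radius can flip the smoothed prediction, so each attack step can jump just beyond that radius. The `certify_fn` argument is the hypothetical routine sketched above, and `base_model` supplies a surrogate gradient since the smoothed classifier is not differentiable.

```python
import torch
import torch.nn.functional as F

def certification_aware_attack(base_model, certify_fn, x, y,
                               delta=0.05, eps_max=100 / 255, max_steps=50):
    """Step just past the certified radius at each iteration.

    Hedged sketch of the general idea, not the paper's Algorithm 1.
    Expects x of shape (1, C, H, W) in [0, 1] and integer label y.
    """
    x_adv = x.clone().detach()
    for _ in range(max_steps):
        pred, radius = certify_fn(base_model, x_adv)
        if pred != y:
            return x_adv  # smoothed prediction flipped: success
        # Surrogate gradient direction from the differentiable base model.
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(base_model(x_adv), torch.tensor([y]))
        grad = torch.autograd.grad(loss, x_adv)[0]
        direction = grad / (grad.norm() + 1e-12)
        # Jump past the certified region instead of a fixed small step.
        x_adv = (x_adv.detach() + (radius + delta) * direction).clamp(0, 1)
        if (x_adv - x).norm() > eps_max:
            return None  # exceeded the attack budget
    return None  # no adversarial example found within budget
```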
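
All three datasets are loadable through standard tooling; a minimal torchvision sketch is below. The `root` paths are placeholders, and ImageNet must be downloaded separately under its non-commercial license before `datasets.ImageNet` can read it from disk.

```python
from torchvision import datasets, transforms

to_tensor = transforms.ToTensor()

# MNIST and CIFAR-10 download automatically; ImageNet must be obtained
# separately and placed under the given root first.
mnist_test = datasets.MNIST(root="data", train=False, download=True,
                            transform=to_tensor)
cifar_test = datasets.CIFAR10(root="data", train=False, download=True,
                              transform=to_tensor)
imagenet_val = datasets.ImageNet(root="data/imagenet", split="val",
                                 transform=to_tensor)
```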
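
Finally, the Table 3 parameter space can be swept exhaustively. The sketch below reconstructs the listed ranges for the Certification Aware Attack; `run_attack` is a hypothetical entry point, not the repository's actual API.

```python
from itertools import product

# Parameter grid from Table 3 for the Certification Aware Attack,
# converted from pixel units to the [0, 1] image scale.
eps_min_grid = [v / 255 for v in (1, 5, 10)]
eps_max_grid = [v / 255 for v in (20, 40, 100, 255)]
delta_grid = [0.01, 0.025, 0.05, 0.075, 0.1]

for eps_min, eps_max, delta in product(eps_min_grid, eps_max_grid, delta_grid):
    # run_attack is a hypothetical stand-in for the repository's driver:
    # run_attack(eps_min=eps_min, eps_max=eps_max, delta=delta)
    print(f"eps_min={eps_min:.4f}, eps_max={eps_max:.4f}, delta={delta}")
```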