Efficient Certified Defenses Against Patch Attacks on Image Classifiers
Authors: Jan Hendrik Metzen, Maksym Yatsura
ICLR 2021
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform an empirical evaluation of BAGCERT on CIFAR10 (Krizhevsky, 2009) and ImageNet (Russakovsky et al., 2015). |
| Researcher Affiliation | Industry | Jan Hendrik Metzen & Maksym Yatsura, Bosch Center for Artificial Intelligence, Robert Bosch GmbH, Robert-Bosch-Campus 1, 71272 Renningen, Germany |
| Pseudocode | No | The paper describes its methods in prose and through figures, but does not include any clearly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not include an unambiguous statement or a link indicating that the source code for the described methodology is publicly available. |
| Open Datasets | Yes | We perform an empirical evaluation of BAGCERT on CIFAR10 (Krizhevsky, 2009) and Image Net (Russakovsky et al., 2015). |
| Dataset Splits | Yes | On CIFAR10, BAGCERT certifies 10,000 examples in 43 seconds on a single GPU... For training, we use the Adam optimizer with learning rate 0.001, batch size 96, and train for 350 epochs... Moreover, we apply random horizontal flips and random crops with padding 4 for data augmentation. On ImageNet... Running certification for the entire validation set of 50,000 images takes roughly 7 minutes. |
| Hardware Specification | Yes | BAGCERT requires (depending on its receptive field size) only between 39.0 and 48.5 seconds for certifying all 10,000 test examples on a single Tesla V100 SXM2 GPU. |
| Software Dependencies | No | The paper mentions various algorithms and architectural components like Adam optimizer, ResNet, and Shake-Shake regularization, but it does not specify software versions for libraries, frameworks, or programming languages (e.g., PyTorch 1.9, Python 3.8). |
| Experiment Setup | Yes | For training, we use the Adam optimizer with learning rate 0.001, batch size 96, and train for 350 epochs. We apply a cosine decay learning rate schedule (Loshchilov & Hutter, 2017) with a warmup of 10 epochs. Moreover, we apply random horizontal flips and random crops with padding 4 for data augmentation. ...weight decay is 0.0, and the one-hot penalty is set to σ = 0.0. For training, we use the Adam optimizer with learning rate 0.00033, batch size 64, and train for 60 epochs. |
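
The Experiment Setup row quotes the CIFAR10 training configuration (Adam, learning rate 0.001, batch size 96, 350 epochs, cosine decay with a 10-epoch warmup, random horizontal flips and random crops with padding 4). Since the paper's code is not publicly released, the sketch below only illustrates how such a configuration could be expressed in PyTorch (assumed here, version ≥ 1.10 for `SequentialLR`/`LinearLR`); the small CNN is a hypothetical stand-in for the BAGCERT architecture, not the authors' model.

```python
import torch
import torch.nn as nn
import torchvision
import torchvision.transforms as T

# Data augmentation as quoted: random horizontal flips and random crops with padding 4.
train_transform = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomCrop(32, padding=4),
    T.ToTensor(),
])
train_set = torchvision.datasets.CIFAR10(
    root="./data", train=True, download=True, transform=train_transform)
train_loader = torch.utils.data.DataLoader(
    train_set, batch_size=96, shuffle=True, num_workers=4)

# Placeholder model: the BAGCERT architecture is not open-sourced,
# so this tiny CNN is purely illustrative.
model = nn.Sequential(
    nn.Conv2d(3, 64, 3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(64, 10),
)

epochs, warmup_epochs = 350, 10
optimizer = torch.optim.Adam(model.parameters(), lr=0.001, weight_decay=0.0)

# Cosine decay (Loshchilov & Hutter, 2017) preceded by a 10-epoch linear warmup.
warmup = torch.optim.lr_scheduler.LinearLR(
    optimizer, start_factor=1e-2, total_iters=warmup_epochs)
cosine = torch.optim.lr_scheduler.CosineAnnealingLR(
    optimizer, T_max=epochs - warmup_epochs)
scheduler = torch.optim.lr_scheduler.SequentialLR(
    optimizer, schedulers=[warmup, cosine], milestones=[warmup_epochs])

criterion = nn.CrossEntropyLoss()
for epoch in range(epochs):
    for images, labels in train_loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()
```

For the quoted ImageNet setup (learning rate 0.00033, batch size 64, 60 epochs), only the hyperparameter values above would change; the paper's certification procedure itself is not reproduced here.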