Robustness Certificates for Sparse Adversarial Attacks by Randomized Ablation
Authors: Alexander Levine, Soheil Feizi
AAAI 2020, pp. 4585-4593 | Conference PDF | Archive PDF | Plain Text | LLM Run Details
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | Experimentally, on MNIST, we can certify the classifications of over 50% of images to be robust to any distortion of at most 8 pixels. [...] In this section, we provide experimental results of the proposed method on MNIST, CIFAR-10, and ImageNet. |
| Researcher Affiliation | Academia | Alexander Levine, Soheil Feizi University of Maryland, College Park {alevine0, sfeizi}@cs.umd.edu |
| Pseudocode | No | The paper describes procedures and theoretical foundations but does not include a dedicated pseudocode block or algorithm listing. |
| Open Source Code | Yes | Code and supplementary material is available at https://github.com/alevine0/randomizedAblation/. |
| Open Datasets | Yes | Experimentally, on MNIST, we can certify the classifications... In this section, we provide experimental results of the proposed method on MNIST, CIFAR-10, and ImageNet. |
| Dataset Splits | No | The paper mentions using a 'test set' for MNIST/CIFAR and the 'ILSVRC2012 validation set' for testing ImageNet, but it does not provide explicit details about standard training/validation/test splits (e.g., percentages or sample counts for validation data) for all datasets needed for reproduction. |
| Hardware Specification | No | The paper does not provide specific hardware details such as GPU models, CPU types, or memory specifications used for running the experiments. |
| Software Dependencies | No | The paper mentions using ResNet architectures and the Foolbox package, but it does not provide specific version numbers for any software dependencies required to reproduce the experiments. |
| Experiment Setup | Yes | Unless otherwise stated, the uncertainty α is 0.05, and 10,000 randomly-ablated samples are used to make each prediction. [...] We use 1,000 and 10,000 samples, respectively, for these two steps. [...] For performance reasons, during training, we ablate the same pixels from all images in a minibatch. We use the same retention constant k during training as at test time. [...] for greyscale images where pixels in S are floating point values between zero and one (i.e. S = [0, 1]), we encode s ∈ S as the tuple (s, 1 − s), and then encode NULL as (0, 0). (A code sketch of this ablation scheme follows the table.) |
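
For orientation, here is a minimal sketch of the ablate-and-vote procedure the quoted setup describes: encode each greyscale pixel s as the pair (s, 1 − s), retain k randomly chosen pixels, set the rest to the NULL code (0, 0), and take a majority vote over many ablated copies. The `classifier` interface, function names, and array shapes are illustrative assumptions, not taken from the authors' repository.

```python
import numpy as np

def ablate(image, k, rng):
    """Randomly retain k pixels of a greyscale image; ablate the rest.

    A value s in [0, 1] is encoded as the channel pair (s, 1 - s), so the
    NULL value (0, 0) is distinguishable from every real pixel value.
    """
    h, w = image.shape
    encoded = np.stack([image, 1.0 - image], axis=0)  # shape (2, h, w)
    keep = rng.choice(h * w, size=k, replace=False)   # indices of retained pixels
    mask = np.zeros(h * w, dtype=bool)
    mask[keep] = True
    encoded *= mask.reshape(1, h, w)                  # NULL = (0, 0) elsewhere
    return encoded

def predict(classifier, image, k, n_classes, n_samples=10_000, seed=0):
    """Majority vote of the base classifier over n_samples random ablations.

    `classifier` is assumed to map an encoded (2, h, w) array to a class index.
    """
    rng = np.random.default_rng(seed)
    votes = np.zeros(n_classes, dtype=int)
    for _ in range(n_samples):
        votes[classifier(ablate(image, k, rng))] += 1
    return int(votes.argmax())
```

In the paper's two-step procedure, a smaller sample (1,000 ablations) selects the candidate class and a larger sample (10,000 ablations) is used with uncertainty α = 0.05 to bound its probability and derive the certificate; the sketch above shows only the plain majority-vote prediction.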