On Certifying Non-Uniform Bounds against Adversarial Attacks
Authors: Chen Liu, Ryota Tomioka, Volkan Cevher
ICML 2019
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We provide experimental evidence in Section 4... In this Section, we compare our certified non-uniform bounds with uniform bounds. We also use our algorithm as a tool to explore the decision boundaries of different models. All the experiments here are implemented in the framework of PyTorch and can be finished within several hours on a single NVIDIA Tesla GPU machine. |
| Researcher Affiliation | Collaboration | ¹EPFL, Lausanne, Switzerland; ²Microsoft Research, Cambridge, UK. |
| Pseudocode | Yes | Algorithm 1 Bound Estimation |
| Open Source Code | No | The paper does not provide a direct link to the source code for the methodology described, nor does it explicitly state that the code will be made publicly available. |
| Open Datasets | Yes | real datasets, including MNIST, Fashion-MNIST and SVHN (Netzer et al., 2011). |
| Dataset Splits | Yes | 90% of the data points are in the training set and the rest are reserved for testing. *(A minimal split sketch follows the table.)* |
| Hardware Specification | Yes | All the experiments here are implemented in the framework of PyTorch and can be finished within several hours on a single NVIDIA Tesla GPU machine. |
| Software Dependencies | No | The paper mentions 'PyTorch' as the framework for implementation but does not specify its version or any other software dependencies with version numbers. |
| Experiment Setup | Yes | We set the perturbation budget of PGD to be 0.1 and search for adversarial examples for 20 iterations. *(See the PGD sketch after the table.)* |
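
The Dataset Splits row quotes a 90/10 partition but does not say how it was drawn. A minimal sketch, assuming a seeded random split of MNIST via torchvision; the dataset choice, the randomness, and the seed are all assumptions for illustration, not the authors' procedure:

```python
import torch
from torch.utils.data import random_split
from torchvision import datasets, transforms

# Hypothetical reconstruction of the quoted 90/10 train/test split.
# The excerpt does not state how the split was drawn; a seeded
# random split is assumed here purely for illustration.
full = datasets.MNIST("data", train=True, download=True,
                      transform=transforms.ToTensor())
n_train = int(0.9 * len(full))  # 90% of the data points for training
train_set, test_set = random_split(
    full, [n_train, len(full) - n_train],
    generator=torch.Generator().manual_seed(0))
```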
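
The Experiment Setup row pins down two PGD hyperparameters: a perturbation budget of 0.1 and 20 search iterations. Below is a minimal sketch of L∞ PGD with those two values plugged in; the step size `alpha`, the random start, and the [0, 1] pixel clamp are assumptions not stated in the excerpt, so this is an illustrative sketch rather than the authors' exact attack.

```python
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=0.1, steps=20, alpha=None):
    """L-infinity PGD with the quoted budget (eps=0.1) and 20 steps.

    `alpha` is an assumed step size (2.5 * eps / steps by default);
    the paper excerpt does not specify one.
    """
    alpha = alpha if alpha is not None else 2.5 * eps / steps
    # Random start inside the eps-ball (an assumption, common in PGD).
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        # Ascend the loss, then project back into the eps-ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)
        x_adv = x_adv.clamp(0, 1)  # keep pixels in a valid range
    return x_adv
```

Calling `pgd_attack(model, x, y)` with the defaults reproduces the quoted configuration (budget 0.1, 20 iterations); every other choice above is an illustrative default.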