Robustness of classifiers: from adversarial to random noise
Authors: Alhussein Fawzi, Seyed-Mohsen Moosavi-Dezfooli, Pascal Frossard
NeurIPS 2016
| Reproducibility Variable | Result | LLM Response |
|---|---|---|
| Research Type | Experimental | We perform experiments and show that the derived bounds provide very accurate estimates when applied to various state-of-the-art deep neural networks and datasets. |
| Researcher Affiliation | Academia | École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland; {alhussein.fawzi, seyed.moosavi, pascal.frossard} at epfl.ch |
| Pseudocode | No | The paper does not include any explicitly labeled pseudocode or algorithm blocks. |
| Open Source Code | No | The paper does not provide any explicit statements about releasing source code for the methodology, nor does it include a link to a code repository. |
| Open Datasets | Yes | LeNet (MNIST), LeNet (CIFAR-10), VGG-F (ImageNet), VGG-19 (ImageNet) |
| Dataset Splits | No | The paper mentions a test set D used to evaluate β(f; m), but it does not provide specific training/validation split details, percentages, or sample counts. |
| Hardware Specification | Yes | We also gratefully acknowledge the support of NVIDIA Corporation with the donation of the Tesla K40 GPU used for this research. |
| Software Dependencies | No | The paper does not specify any software dependencies with version numbers. |
| Experiment Setup | No | The paper evaluates pre-existing state-of-the-art classifiers (LeNet, VGG-F, VGG-19) on various datasets but does not provide specific details on hyperparameters, training configurations, or model initialization used for these classifiers in their experiments. |
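Since the paper releases no code, the bounds it derives can still be illustrated directly. Its central result in the simplest (linear) regime is that robustness to random noise exceeds worst-case adversarial robustness by a factor on the order of √d, where d is the input dimension. The following is a minimal NumPy sketch of that relation for a linear binary classifier, not the authors' implementation; the dimension, seed, and sample count are arbitrary illustrative choices rather than values from the paper.

```python
# Minimal sketch (not the authors' code) of the paper's key relation for a
# linear binary classifier f(x) = w.x + b: the distance to the decision
# boundary along a random direction concentrates around sqrt(d) times the
# worst-case (adversarial) distance. Dimension, seed, and sample count are
# hypothetical choices for illustration only.
import numpy as np

rng = np.random.default_rng(0)
d = 1000                                # input dimension (hypothetical)
w = rng.normal(size=d)                  # classifier weights
b = 0.5
x = rng.normal(size=d)                  # a test point

f_x = w @ x + b
r_adv = abs(f_x) / np.linalg.norm(w)    # worst-case (adversarial) robustness

# Robustness along a unit direction v is the smallest |t| with f(x + t*v) = 0,
# i.e. |f(x)| / |w.v|; sample many directions uniformly on the sphere.
vs = rng.normal(size=(10_000, d))
vs /= np.linalg.norm(vs, axis=1, keepdims=True)
r_rand = np.abs(f_x) / np.abs(vs @ w)

ratio = np.median(r_rand) / (r_adv * np.sqrt(d))
print(f"adversarial robustness:          {r_adv:.4f}")
print(f"median random-noise robustness:  {np.median(r_rand):.4f}")
print(f"median ratio / sqrt(d):          {ratio:.3f}")  # ~1.48, i.e. Theta(sqrt(d))
```

Up to a universal constant, the measured gap between random-noise and adversarial robustness grows as √d, which is consistent with the Θ(√d) scaling the paper derives for linear classifiers and then extends, via curvature assumptions, to deep networks.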